Triggers? Yes. I ran a system like this, using triggers on all insert/update/delete operations to replicate data between MySQL and MSSQL. A PHP script copied the changes between the two systems. Sounds a little nuts, but it worked very well for a smallish (< 10 GB) dataset.

Performance_schema -> Just a thought really. I wouldn't do this, and I'm not sure it's viable, but I was thinking of the events_statements_summary* tables.

-----Original Message-----
From: Erik Cederstrand [mailto:erik@cederstrand.dk]
Sent: Wednesday, April 13, 2016 11:27 AM
To: Campbell Rhys, PMK-ACS-QPM <Rhys.Campbell@swisscom.com>
Cc: maria-discuss@lists.launchpad.net
Subject: Re: [Maria-discuss] Monitoring InnoDB tables?

Hello Rhys,

Thanks for your suggestions!
On 13 Apr 2016, at 09:48, Rhys.Campbell@swisscom.com wrote:
> Triggers?
You mean create a custom logging table in the customer database? I guess I could do something like this on each table:

CREATE TRIGGER insert_event AFTER INSERT ON t1
FOR EACH ROW
  INSERT INTO event_log SET table_name = 't1', count = 1
  ON DUPLICATE KEY UPDATE count = count + 1;
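I guess event_log would also need a unique key on table_name for the ON DUPLICATE KEY UPDATE to kick in, plus matching triggers for UPDATE and DELETE. Roughly something like this (the table and column names are just a sketch):

CREATE TABLE event_log (
  table_name VARCHAR(64) NOT NULL,
  count      BIGINT UNSIGNED NOT NULL DEFAULT 0,
  PRIMARY KEY (table_name)
) ENGINE = InnoDB;

CREATE TRIGGER update_event AFTER UPDATE ON t1
FOR EACH ROW
  INSERT INTO event_log SET table_name = 't1', count = 1
  ON DUPLICATE KEY UPDATE count = count + 1;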
> Monitoring the binlog, as you state, might be a goer.
Yes, but I'd like to know which schema and table changed, not just that *something* changed somewhere on the server. Otherwise I have too much work to do.
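Although, if I understand the format correctly, with binlog_format = ROW the Table_map events do carry the schema and table name, so maybe polling the binlog like this would be enough (the log file name below is just an example):

SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000123' FROM 4 LIMIT 50;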
> You could enable the performance schema and monitor queries there?
I tried to enable the performance_schema. Any pointers to where I should look? Poking around, my best bet right now would be:

SELECT object_schema, object_name, sum_timer_write_allow_write
FROM performance_schema.table_lock_waits_summary_by_table
WHERE object_schema = 'test' AND object_name = 't1';
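Another table I'm eyeing is table_io_waits_summary_by_table, which appears to keep per-table row-operation counters, assuming the relevant instruments and consumers are enabled (the counters reset on server restart or when the summary table is truncated):

SELECT object_schema, object_name, count_insert, count_update, count_delete
FROM performance_schema.table_io_waits_summary_by_table
WHERE object_schema = 'test' AND object_name = 't1';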
> If you cannot change the source database at all, how about an additional slave that you can change? I think I'd go for this option.
I could try that.

Erik
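P.S. For my own notes, pointing a throwaway slave at the source would look roughly like this (host, credentials and log coordinates are placeholders), and the logging triggers or performance_schema monitoring could then live on the slave instead of the source. One caveat, as far as I understand: with ROW-based replication, triggers defined on the slave don't fire for replicated row changes, so the performance_schema route may be the better fit there.

CHANGE MASTER TO
  MASTER_HOST = 'source.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '***',
  MASTER_LOG_FILE = 'mysql-bin.000123',
  MASTER_LOG_POS = 4;
START SLAVE;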