"Access denied for user" on AWS RDS MariaDB Instance
Hello All,

I have a funny issue that is bothering me on a few MariaDB instances. We're running 10.6.14 on AWS RDS with very little changed in the default configuration. We have eight instances of a vendor application running on EC2 instances that use these MariaDB instances. The application is deployed and configured via TF/Ansible, and we don't really log in to these instances manually to change or configure anything. Very occasionally I get an issue with logging into the database. The vendor application occasionally logs the following errors...

Connect failed to database (db): Access denied for user 'user'@'X.X.X.X' (using password: YES) - waiting for 125 seconds before retry
Connect failed to database (db): Access denied for user 'user'@'X.X.X.X' (using password: YES) - waiting for 25 seconds before retry
Connect failed to database (db): Access denied for user 'user'@'X.X.X.X' (using password: YES) - waiting for 5 seconds before retry
Connect failed to database (db): Access denied for user 'user'@'X.X.X.X' (using password: YES) - waiting for 1 seconds before retry

Of course I recognized these as the classic "wrong password" login error. There are corresponding entries in the MariaDB error log...

Access denied for user 'user'@'X.X.X.X' (using password: YES)

The vendor is a bit dismissive of this and blames the MariaDB instance. I'm not certain that is the case. We see perhaps one occurrence per day, although some days it doesn't happen at all and on a small number of days we see two. It does not appear to happen on a predictable schedule. The MariaDB instances are working fine at all other times. CPU utilization is very low and we only serve a small number of connections. There is nothing else of interest in the error log. Credentials are set, via Ansible, in configuration files. The details are correct, and 99.9% of the time the vendor application has no issue logging into the MariaDB instance. I can't actually tie these errors to anything not working... but we're running very low traffic at the moment, and I'm concerned about it becoming more of an issue later on.

Two possibilities exist in my mind...

1. A bug in the vendor application is occasionally munging the password, creating the error we see logged.
2. Something else as yet unknown, e.g. a rogue cronjob configured with the wrong vendor app credentials, which creates the above error. I don't believe this to be the case, but I won't 100% exclude it as a possibility.

Is anyone aware of any logging/auditing that I could activate on the MariaDB RDS instances to get a little more information about this? I'm thinking about setting up an strace on the vendor process, but I want to see if there are any better options first.

Cheers, Rhys
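On the logging/auditing question above, one option is the MariaDB Audit Plugin (server_audit). As far as I know, on RDS it cannot be installed with INSTALL SONAME; it is enabled by adding the MARIADB_AUDIT_PLUGIN option to the instance's DB option group, and with the SERVER_AUDIT_EVENTS setting set to CONNECT it records connects, disconnects and failed connection attempts together with the user and source host, which should show whether the failed logins really come from the vendor application servers. A minimal sketch of what to check from the SQL side once the option is applied (the variable names are the standard server_audit ones, and the Access_denied_errors counter is, if I remember right, MariaDB-specific):

    -- Is the audit plugin active? On RDS it is enabled through the
    -- MARIADB_AUDIT_PLUGIN option group option, not INSTALL SONAME.
    SHOW PLUGINS;                                  -- look for SERVER_AUDIT / ACTIVE

    -- What is being audited? 'CONNECT' covers connects, disconnects
    -- and failed connection attempts (the "Access denied" cases).
    SHOW GLOBAL VARIABLES LIKE 'server_audit%';

    -- Even without the plugin, a running counter of failed
    -- authentications; a jump here should line up with the app errors.
    SHOW GLOBAL STATUS LIKE 'Access_denied_errors';

The audit records themselves end up in the instance's log files, which can be viewed or downloaded from the RDS console or CLI in the same way as the error log.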
Hi, Rhys.Campbell,

How is authentication configured server-side? Just the conventional mysql_native_password plugin?

Regards,
Sergei
Chief Architect, MariaDB Server
and security@mariadb.org
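A quick way to answer that on the server side is to look at the account definition itself; a minimal sketch, with 'user' standing in for the real application account name:

    -- Which host pattern(s) and authentication plugin does the app account use?
    -- On 10.6, mysql.user is a compatibility view over mysql.global_priv.
    SELECT user, host, plugin
      FROM mysql.user
     WHERE user = 'user';

    -- Full definition of a single account, including its auth plugin:
    SHOW CREATE USER 'user'@'X.X.X.X';

If more than one row comes back for the user (say a wildcard host plus a specific address) with different passwords, intermittent access-denied errors are possible whenever a connection arrives from an address that matches a different row than expected.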