DELETE IGNORE on Tables with Foreign Keys Can Break Replication
STOP: DELETE IGNORE on Tables with Foreign Keys Can Break Replication - MySQL Performance Blog
DELETE IGNORE suppresses errors and downgrades them to warnings. If you are not aware of how IGNORE behaves on tables with FOREIGN KEYs, you could be in for a surprise.
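A minimal sketch of the surprise (table names and values are illustrative):

```sql
CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (
  id INT PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent (id)
) ENGINE=InnoDB;
INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (1, 1);

-- A plain DELETE fails here with error 1451 (the row is still
-- referenced by child). DELETE IGNORE instead reports success with
-- "0 rows affected" and downgrades the error to a warning:
DELETE IGNORE FROM parent WHERE id = 1;
SHOW WARNINGS;
```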

How to recover a single InnoDB table from a Full Backup
How to recover a single InnoDB table from a Full Backup - MySQL Performance Blog
Sometimes we need to restore only some tables from a full backup, perhaps because data loss affected only a small number of tables. In this particular scenario it is faster to recover single tables than to restore the full backup. This is easy with MyISAM, but if your tables are InnoDB the process is a different story.
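The linked post covers the full procedure; as a rough sketch of the general idea on a server running with innodb_file_per_table (database and table names are illustrative):

```sql
-- Drop the table's current tablespace file:
ALTER TABLE mydb.mytable DISCARD TABLESPACE;
-- ...copy the table's .ibd file from the prepared backup into the
-- database directory at the filesystem level, then re-attach it:
ALTER TABLE mydb.mytable IMPORT TABLESPACE;
```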

SQL Generator for testing SQL servers
SQL Generator for testing SQL servers (MySQL, JavaDB, PostgreSQL) in Launchpad
This project implements a pseudo-random data and query generator that can be used to test any Perl DBI, JDBC or ODBC-compatible SQL server, in particular MySQL, but also JavaDB and PostgreSQL.

A day in the life of a slow page at Stack Overflow
A day in the life of a slow page at Stack Overflow
Tuning SQL is key: the simple act of tuning it reduced the load time for the particular page by almost 50% in production. However, having a page take 300ms just because your ORM is inefficient is not excusable. The page is now running at the totally awesome speed of 20-40ms render time. In production. ORM inefficiency cost us a 10x slowdown.

PHP implementation of the MySQL old_password function
PHP implementation of the MySQL old_password function
MySQL has a built-in function called PASSWORD that calculates the hash of a password for secure storage in a database. In MySQL versions older than 4.1 the hashing function was very basic, so all newer versions use the cryptographically secure SHA-1 hashing algorithm.
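For illustration, here is a Python port of the same pre-4.1 algorithm that the linked article implements in PHP. As in the original C implementation, spaces and tabs in the password are skipped:

```python
def old_password(password: str) -> str:
    """Compute MySQL's pre-4.1 PASSWORD() / OLD_PASSWORD() hash."""
    nr, nr2, add = 1345345333, 0x12345671, 7
    for ch in password:
        if ch in (' ', '\t'):  # spaces and tabs are ignored
            continue
        c = ord(ch)
        nr ^= ((((nr & 63) + add) * c) + (nr << 8)) & 0xFFFFFFFF
        nr2 = (nr2 + ((nr2 << 8) ^ nr)) & 0xFFFFFFFF
        add = (add + c) & 0xFFFFFFFF
    # Only 31 bits of each half survive, giving a 16-hex-digit hash.
    return '%08x%08x' % (nr & 0x7FFFFFFF, nr2 & 0x7FFFFFFF)

print(old_password('mypass'))  # prints 6f8c114b58f2ce9e
```

The value for 'mypass' matches the example shown in the MySQL reference manual for the old hash format.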

How to calculate a password hash the 'old way'

PHP: Transparent load balancing and sharding with mysqlnd
PHP: Transparent load balancing and sharding with mysqlnd
30+ lines of PHP to add round-robin connection load balancing, and 70+ lines of PHP to add MySQL master-slave replication or sharding support to your application, without changing the application.

Thinking about running OPTIMIZE on your Innodb Table ? Stop!
Thinking about running OPTIMIZE on your Innodb Table ? Stop! | MySQL Performance Blog
InnoDB/XtraDB tables do benefit from being reorganized often. You can get data physically laid out in primary key order, as well as better-filled primary key and index pages, and so use less space; it is just that OPTIMIZE TABLE might not be the best way to do it.
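One common alternative from that era is a "null" ALTER, which rebuilds the table and its indexes explicitly (table name is illustrative):

```sql
-- Rebuilds the InnoDB table in primary key order, recreating its
-- indexes, without invoking OPTIMIZE TABLE:
ALTER TABLE my_table ENGINE=InnoDB;
```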

MySQL Limitations
MySQL Limitations Part 3: Subqueries
The following query will surprise users unpleasantly:

select * from a where a.id in (select id from b);

Users expect the inner query to execute first, then the results to be substituted into the IN() list. But what happens instead is usually a full scan or index scan of table a, followed by N queries to table b. This is because MySQL rewrites the query to make the inner query dependent on the outer query, which could be an optimization in some cases, but de-optimizes the query in many other cases.
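A common workaround is to express the semi-join by hand as a JOIN, which the optimizer of that era handled far better:

```sql
-- Usually avoids the dependent-subquery plan; add DISTINCT if
-- b.id is not unique, to preserve the IN() semantics:
SELECT a.*
FROM a
  INNER JOIN b ON b.id = a.id;
```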

Using MySQL as a NoSQL
Yoshinori Matsunobu's blog: Using MySQL as a NoSQL - A story for exceeding 750,000 qps on a commodity server
We are using "only MySQL". We still use memcached for front-end caching (i.e. preprocessed HTML, count/summary info), but we do not use memcached for caching rows. We do not use NoSQL, either. Why? Because we could get much better performance from MySQL than from other NoSQL products. In our benchmarks, we could get 750,000+ qps on a commodity MySQL/InnoDB 5.1 server from remote web clients. We have also gotten excellent performance in production environments.

Scaling writes in MySQL
Scaling writes in MySQL
After partitioning, tests showed that we could sustain an insert rate of 10K rows per second for some time. As the table size grew past 10 million records, the insert rate dropped to about 8500 rows per second, but it stayed at that rate for well over 44 million records. I tested inserts up to 350 million records and we were able to sustain an insert rate of around 8500 rows per second. Coincidentally, during Michael Jackson's memorial service, we actually did hit an incoming rate of a little over 8000 records per second for a few hours.