MySQL slow data fetching


I have a MySQL table with a huge amount of rows. I am running a simple query and it executes pretty fast. The only problem is that fetching the data to the client is terribly slow: about 150 seconds, when the query itself takes only 0.3 seconds.

I am using PHP 5.3 + Zend Framework on the client and an Amazon EC2 MySQL instance. The PHP code and MySQL are located on different servers.

Can anyone tell me how to increase the data fetching speed?


There are 3 best solutions below

Answer 1 (3 votes):

Based on the comments, you eventually provided the query that you are running:

select * from table

You also said that you have "a huge amount of rows". That should make it simple enough to understand why it takes forever to transfer the result to the box querying your database.

Also, even though your query runs fast, you still have to deal with network limitations. And once you actually receive the result, if you were using an ORM for instance, you could find that hydration of the result set adds further time to your request.

Overall, you answered this question yourself. Don't do a select * from table where the table is "enormous".

You should look at adding limits and offsets to your query, and only select the rows that you actually need.
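For example (the table and column names here are hypothetical), a paged query only pulls one slice of rows at a time, and for deep pages a keyset condition avoids the cost of scanning past a large offset:

```sql
-- Offset pagination: fetch one page of 100 rows instead of the whole table.
SELECT id, name, created_at
FROM my_table
ORDER BY id
LIMIT 100 OFFSET 0;

-- Keyset pagination: for deep pages, seek past the last id already seen,
-- which stays fast because MySQL can jump straight to it via the index.
SELECT id, name, created_at
FROM my_table
WHERE id > 12345        -- last id from the previous page
ORDER BY id
LIMIT 100;
```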

Also, "huge" is extremely subjective. To me, a large table is about 10 million rows, but to someone else 10 million is pretty small. I wouldn't even like to put a number on what a huge table might be.

Answer 2 (0 votes):

The slowness is because you retrieve all the data from the table, and since your database and website are on two different servers, all of that data has to be transferred over the network, which introduces additional slowness.

If you are asking why it is faster when you run the query in your SQL client, it is because most SQL clients automatically add a limit to the number of rows returned.
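For instance (the exact cap varies per tool), many GUI clients silently wrap or rewrite your statement along these lines, which is why the same query feels instant there:

```sql
-- What a typical desktop SQL client effectively executes:
SELECT * FROM table LIMIT 0, 1000;
```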

Answer 3 (3 votes):

As all comments are saying: you'll have to provide some more info to get an exact (or at least less vague) answer. But a few rules of thumb you should keep in mind for every query you write:

  • Avoid SELECT * whenever possible, especially things like SELECT COUNT(*); only select the fields that are of interest to you.
  • Split your data over cleverly indexed tables and use JOIN. A good JOIN can boost performance, often quite dramatically.
  • Use EXPLAIN EXTENDED, and avoid situations where MySQL has to create a temporary table on disk.
  • When using EXPLAIN, also check for implicit CASTs or COLLATION conversions. Unlike in PHP, an UNSIGNED INTEGER is not the same as a VARCHAR. Think of it as $x = 123; followed by if ($x === '123'), which is false: a string !== an int...
  • Avoid large data fields (like MEDIUMTEXT) as they will always result in disk access.
  • Avoid LIKE as much as you can, especially when the pattern starts with a wildcard (x LIKE '%yz'), which rules out index use.
  • Remember: % isn't the only wildcard: _ is a sort of joker for a single character.
  • You have an index, but could you do with more indexes, without getting silly? Don't index too little, and don't index too much
  • Have you chosen the right storage engine?
  • EXPLAIN, EXPLAIN and EXPLAIN again...
  • If you have to use a slow query, cache the results
  • LIMIT-ing the results for "huge" data-sets is a must. If you're fetching vast lumps of data, what are you doing with them? I can't conceive of a situation where all of the data needs to be sent to the client as one big lump.
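To make the EXPLAIN and indexing advice above concrete, here is a minimal sketch; `my_table`, `name`, and `code` are hypothetical names, and the exact EXPLAIN output depends on your MySQL version:

```sql
-- Ask MySQL for its execution plan before running the query:
EXPLAIN SELECT id, name FROM my_table WHERE name = 'foo';

-- If `type` shows ALL (a full table scan) and `key` is NULL,
-- an index on the filtered column usually helps:
CREATE INDEX idx_my_table_name ON my_table (name);

-- Implicit casts can defeat an index: if `code` is a VARCHAR,
-- WHERE code = 123 forces a per-row cast to a number.
-- Quote the literal so the index on `code` stays usable:
SELECT id FROM my_table WHERE code = '123';
```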
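The wildcard point can be illustrated the same way (again with hypothetical names): only a pattern with a fixed prefix lets MySQL use an index range scan:

```sql
-- Leading wildcard: every row's `name` must be examined (no index range scan).
SELECT id FROM my_table WHERE name LIKE '%son';

-- Fixed prefix: an index on `name` can narrow the scan to matching rows.
SELECT id FROM my_table WHERE name LIKE 'John%';
```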