TokuDB is certainly worth a shot if your working set is disk-bound. Partitioning might also be an option for you.
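A minimal sketch of what date-range partitioning could look like here, based on the schema described below (table and column names are hypothetical; it assumes the DECIMAL(22,6) datetime column holds a unix epoch timestamp, since FLOOR() of an exact numeric type is an allowed partitioning expression):

```sql
-- Hypothetical table mirroring the (datetime, value) schema from the question.
CREATE TABLE ticks (
    dt  DECIMAL(22,6) NOT NULL,   -- assumed: unix timestamp with microseconds
    val DECIMAL(22,6) NOT NULL,
    KEY (dt)
) ENGINE=Aria
PARTITION BY RANGE (FLOOR(dt)) (
    PARTITION p2015_04 VALUES LESS THAN (1430438400),  -- before 2015-05-01 UTC
    PARTITION p2015_05 VALUES LESS THAN (1433116800),  -- before 2015-06-01 UTC
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);
```

With monthly (or daily) partitions, SELECTs over the last ~5 days are pruned down to the newest partition(s), and old partitions can be archived or dropped without touching the hot data. You'd need to add new partitions periodically (e.g. via an event or cron job).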

 

From: Maria-discuss [mailto:maria-discuss-bounces+rhys.campbell=tradingscreen.com@lists.launchpad.net] On Behalf Of Roberto Spadim
Sent: 13 May 2015 17:39
To: Maria Discuss
Subject: [Maria-discuss] doubt about best engine

 

hi guys, I'm unsure about the "best" engine (best = no INSERT lag, ~0.100 s or less; good SELECT speed, ~1 to 10 seconds is OK)

 

I have two tables that only "receive" INSERT and SELECT queries (no DELETE/UPDATE/ALTER)

they grow by 200 MB/day and 3,000,000 rows/day

 

my question is: how do I keep the database size small and the read rate good?

they get ~104 INSERTs/second, but each insert is multi-valued, like

... INSERT INTO table VALUES (),(),(),(),(),() ....

 

in other words... today I'm using Aria (it's crash-safe) and I don't have problems with concurrent inserts / table locks etc.; my concern is table size and read speed as the database grows

 

I'm considering the Spider engine or some other sharding system once the data exceeds 100 GB, or maybe a big RAID-6 system

the server is an 8-core Xeon with 16 GB RAM and two 500 GB SAS HDs in RAID 1 (>180 MB/s)

 

the client side is a C++ program that I can't change or recompile; in other words, I can only change the server side and the load balancing

 

the data table is something like:

(datetime decimal(22,6),value decimal(22,6))

 

the SELECT queries mostly use recent data (~5 days); outlier queries reach back to ~1 month ago

 

ideas are welcome

 

--

Roberto Spadim