Committer: Mark Callaghan
Date: 2011-09-04 17:52:17 UTC
Revision ID: mdcallag@gmail.com-20110904175217-o3du5ga4fevym76j
Detect disconnected clients while waiting in admission control
This uses vio_is_connected to detect clients that disconnect while
waiting to run in the admission control queue. It also checks thd->killed.
The checks are made once per second by default (configurable via the my.cnf
variable check_client_interval_milliseconds). This replaces the standard
method of using enter_cond/exit_cond because enter_cond doesn't detect
disconnected clients. The cost of this change is that clients don't respond
to KILL immediately; that can take up to check_client_interval_milliseconds.
This also removes the calls to admission_control_enter/exit made when temp
tables are converted from Heap to MyISAM. I don't want a misbehaving account
to exceed max_concurrent_queries by having 1000 threads doing that
conversion at once.
Removed the unused "wait_seconds" arg from admission_control_diskio_enter
and changed the "wait_seconds" arg in admission_control_enter to a flag.
There is no wait timeout: a query blocks in the AC queue until it can run,
is killed, or the client disconnects. This matches existing behavior aside
from the wait in the queue.
This adds Control_admission_waits and Control_transaction_failures to
SHOW GLOBAL STATUS. They count the number of queries that wait on the max
concurrent queries limit and that fail on the max concurrent transactions
limit.
This changes the ifdef DEBUGGING code to use ifndef DBUG_OFF so that it is
always enabled in debug builds.
Perf results below show that performance is OK (ac32 matches tc32).
tc0 - innodb_thread_concurrency=0
tc32 - innodb_thread_concurrency=32
ac32 - admission_control enabled with max_concurrent_queries=32
Throughput for 8 -> 1024 clients doing update 1 row by PK via sysbench oltp.
Database was cached.
8 16 32 64 128 256 512 1024 clients
29568 44313 48509 49062 47478 43170 31700 15275 tc0
29510 45301 48730 49783 49426 48263 48229 48449 tc32
29367 44617 49011 49313 48345 48589 48494 48405 ac32
Throughput for 8 -> 1024 clients doing the read-only transaction workload
for sysbench oltp. Database was cached.
8 16 32 64 128 256 512 1024 clients
1827 3108 5806 4313 4045 3969 3898 3777 tc0
1817 3090 5613 4424 4003 3935 3865 3857 tc32
1833 3134 5824 4438 4037 3938 3860 3818 ac32
These results are from the most recent changes:
Throughput for 8 -> 1024 clients doing update 1 row by PK via sysbench oltp.
Database was cached. And the binlog was disabled.
8 16 32 64 128 256 512 1024 clients
26913 35439 34517 32256 35234 31146 23088 12112 tc0
26488 36195 34900 34939 35756 35566 37091 36213 tc32
26497 36275 33807 34418 36726 35965 34030 34112 ac32
Throughput for 8 -> 1024 clients doing update 1 row by PK via sysbench oltp.
Database was cached. And the binlog was enabled.
8 16 32 64 128 256 512 1024 clients
4581 4755 4724 4708 4687 4641 4372 4226 tc0
4566 4805 4821 4709 4634 4577 4354 4316 tc32
4621 4759 4737 5083 5042 5013 4973 4910 ac32
Throughput for 8 -> 1024 clients doing fetch 1 row by PK via sysbench oltp.
Database was cached.
8 16 32 64 128 256 512 1024 clients
27791 54773 113667 76263 75341 72917 71721 38484 tc0
28488 57701 113709 78412 73952 72464 71941 68381 tc32
28038 56769 115275 77910 74220 73294 72154 73250 ac32
Throughput for 8 -> 1024 clients doing the read-only transaction workload
for sysbench oltp. Database was cached.
8 16 32 64 128 256 512 1024 clients
1704 2901 5231 5026 3870 3804 3763 3624 tc0
1672 2834 4983 5430 3832 3798 3633 3837 tc32
1699 2822 5251 5277 3983 3855 3830 3690 ac32