I would like to ask if there is a benchmark report for running SQL against an Ignite cache.
I looked through https://apacheignite.readme.io/docs and didn't find one.
Thanks.
GridGain publishes some benchmarks: https://www.gridgain.com/resources/benchmarks/gridgain-benchmarks-results
However, it's not clear what numbers you are looking for. Everything depends on your use case and on what you compare it with. This is especially true for SQL queries.
If in an interview I am asked: "As a DB2 DBA, how would you approach a job or a query which is consuming more time than normal? Which commands would you use, and what steps would you take to resolve it?"
If it is a job, I would use db2mon.sh as a starting point; if it is a query, I would first try db2advis to see if it recommends any indexes.
The Db2 Knowledge Center has a section on "Troubleshooting" which includes Troubleshooting Db2 Servers, and there is a section on Troubleshooting SQL Performance which mentions db2mon.
There is also a section on Performance Tuning. As a DBA, it is worth reading all these sections (at least once anyway).
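As a rough illustration of the kind of query db2mon wraps, here is a minimal sketch that pulls the most expensive statements from the Db2 package cache (MON_GET_PKG_CACHE_STMT is a standard Db2 monitoring table function; the column selection here is illustrative):

```sql
-- Top 10 statements by total CPU time from the Db2 package cache.
-- MON_GET_PKG_CACHE_STMT arguments: section type, executable ID,
-- search args, member (-2 = all members).
SELECT NUM_EXECUTIONS,
       TOTAL_CPU_TIME,
       SUBSTR(STMT_TEXT, 1, 100) AS STMT
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
ORDER BY TOTAL_CPU_TIME DESC
FETCH FIRST 10 ROWS ONLY;
```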
I've just inherited an old PostgreSQL installation and need to do some diagnostics to find out why this database is running slowly. On MS SQL Server you would use a tool such as Profiler to see what queries are running and then see what their execution plans look like.
What tools, if any, exist for PostgreSQL that I can do this with? I would appreciate any help since I'm quite new to Postgres.
Use the pg_stat_statements extension to find long-running queries, then use select * from pg_stat_statements order by total_time / calls desc limit 10 to get the ten with the highest average time, then use explain to see the plan...
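For example, a minimal sketch (note that on PostgreSQL 13 and later the column is total_exec_time rather than total_time):

```sql
-- Requires shared_preload_libraries = 'pg_stat_statements' and
-- CREATE EXTENSION pg_stat_statements; in the target database.
-- Ten statements with the highest average execution time.
SELECT query,
       calls,
       total_time / calls AS avg_time_ms  -- total_exec_time on PostgreSQL 13+
FROM pg_stat_statements
ORDER BY total_time / calls DESC
LIMIT 10;
```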
My general approach is usually a mixture of techniques. This requires no extensions.
Set log_min_duration_statement to catch long-running queries. https://dba.stackexchange.com/questions/62842/log-min-duration-statement-setting-is-ignored should get you started; see also the sketch below.
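A minimal sketch (the 500 ms threshold is illustrative; ALTER SYSTEM needs PostgreSQL 9.4+ and superuser rights):

```sql
-- Log every statement that runs longer than 500 ms.
ALTER SYSTEM SET log_min_duration_statement = '500ms';
SELECT pg_reload_conf();  -- apply the change without restarting the server
```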
Use profiling of client applications to see which queries they are spending their time on. Sometimes there are queries which each take little time but are repeated so frequently that they cause performance problems.
Of course, explain analyze can then help. If you are looking inside plpgsql functions, however, you often need to pull out the queries and run explain analyze on them directly.
Note: ALWAYS run explain analyze in a transaction that rolls back, or in a read-only transaction, unless you know that the statement does not write to the database.
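A minimal sketch of that pattern (the table and predicate are hypothetical):

```sql
-- EXPLAIN ANALYZE actually executes the statement, so wrap potentially
-- writing statements in a transaction and roll it back afterwards.
BEGIN;
EXPLAIN ANALYZE
  DELETE FROM orders WHERE created_at < now() - interval '1 year';
ROLLBACK;  -- the delete never becomes visible
```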
I'm looking for some testing programs for a Firebird DB server, to stress-test it and find bottlenecks.
Something like DB-Optimizer, only for Firebird, would be nice.
They can also be separate programs: one for stress-testing and one for profiling.
You may look at:
Sinatica Monitor
IBSurgeon tools
IBExpert
and you can also just query the monitoring tables (Firebird 2.1), as in the sketch below.
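A minimal sketch of such a query against the MON$ tables (the column choice is illustrative; MON$STATE = 1 marks a currently executing statement):

```sql
-- Statements currently running, as seen by Firebird 2.1+ monitoring tables.
SELECT MON$ATTACHMENT_ID, MON$TIMESTAMP, MON$SQL_TEXT
FROM MON$STATEMENTS
WHERE MON$STATE = 1;
```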
Just complementing Hugues' answer:
Sinatica Monitor uses the monitoring tables from Firebird 2.1, so it will not work very well on FB 2.0 or below.
IBExpert has its own kind of monitoring tables to help you with this task.
If you are using FB 2.5, you can also look at the Firebird Trace Manager.
To run stress tests through your own application, you can always look at AutoHotkey. Just be careful... sometimes you can really mess things up with it.
JMeter may work rather well, as it's fairly easy to set up a basic DB request test, and it can be incredibly flexible. See the JMeter overview and user manual.
You may look at the Firebird profiler to trace the queries.
Does anyone know whether it's a good solution to use SQLite in a multi-threaded environment?
I want to replace SQL Server with a simpler, built-in (embedded) database, as there is no need for such a big server DB. The expected maximum size of the DB would be 4 gigabytes after 4-5 years of usage. Is that normal for an embedded DB? Could it affect performance?
It depends on the type of queries you would use. If the queries are simple selects with plain joins, then SQLite could do fine, but I think you would still be better off with, e.g., Firebird 2.5 when the stable release gets out (RC3 is available now); you would have a somewhat richer SQL dialect to work with. I don't know how important bulk loads are for you, but neither SQLite nor Firebird is very strong in this area. If you need good bulk insert performance and low cost, then you should look at PostgreSQL or MySQL (an illustration of PostgreSQL bulk loading follows below). There is also a very interesting-looking database I happened to stumble upon recently called CUBRID. I have only installed it so far, so I can't tell how good or bad it is, but it certainly seems worth a look.
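As a sketch of what a PostgreSQL bulk load looks like (the table and file path are hypothetical), COPY moves data in one statement instead of row-by-row INSERTs:

```sql
-- Bulk-load a CSV file into an existing table in a single statement.
-- Reading a server-side file requires appropriate privileges.
COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv, HEADER true);
```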
You might also want to look at this Wikipedia article:
http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems
I don't know which distribution you're talking about here. I've only used SQLite.NET, and I know it works well in multithreaded applications.
It can also be deployed on client-server systems so you need not worry at all.
Considering Vinko's statement about 'real' databases, you can ignore him. SQLite is really worth its salt.
If you're working with .NET, you might find this link useful:
http://sqlite.phxsoftware.com
According to the documentation, SQLite is thread-safe, but there are caveats.
You can use SQLite in a multithreaded environment, but only if you build a special version of it (and find out whether the library you'll be using supports it, and tweak it if it doesn't). So, assuming your library supports multithreaded SQLite: if you really need a high level of concurrency on the database, you may prefer to use a 'real' database. Whether that should be MSSQL or any other falls outside the scope of the question.
Consider MySQL and SQL Server Express, for example.
If your concurrency level is low, SQLite can cope with it.
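One way to check what a given SQLite build supports is to inspect its compile-time options, as in this minimal sketch (run in the sqlite3 shell; THREADSAFE=1, the serialized mode, is the usual default):

```sql
-- Lists the options this SQLite library was built with. Look for
-- THREADSAFE=1 (serialized) or THREADSAFE=2 (multi-thread);
-- THREADSAFE=0 means the build is not thread-safe at all.
PRAGMA compile_options;
```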
I would also suggest you take a look at the CUBRID database. It has nice optimizations for Web applications and it is easy to learn.
I wonder whether some open-source SQL database servers offer a way to find out, step by step (maybe even in a graphical representation), what actually happened inside the server during a query (e.g. whether a table scan was used, or whether and which index(es) were used). It would be useful for database optimization.
Most servers have some sort of way to display a query execution plan; EXPLAIN in MySQL, for instance. Which server are you using?
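For example, in MySQL (the table and column are hypothetical):

```sql
-- Prefixing a query with EXPLAIN makes MySQL report the chosen plan
-- (access type, candidate and chosen indexes, estimated rows)
-- instead of executing the query.
EXPLAIN SELECT * FROM customers WHERE last_name = 'Smith';
```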
Almost all will have tools/commands to describe query plans; the graphical part you may have to pay for.