
 Performance Tuning

Performance tuning is a much-discussed topic. In many cases it is the first issue brought up by new customers. The question or complaint is usually about the speed of the database.

It would be logical to take action. To achieve visible improvement quickly, expanding resources is often seen as the solution. And with databases in the cloud and auto-scale, or “fully managed”, these resources are scaled up for you entirely automatically. The only thing that is not managed is the cost.

That is, after all, the revenue model of today’s popular DBaaS solutions. All the more reason to take a proactive look at database performance.

Edco Wallet

Co-Founder & owner

Short-term solution

A logical first step to realize quick improvements is to expand resources: CPU, RAM, etc. However, this can also be a cloud cost pitfall, because in your own environments (on-premises, hosted, data center, private cloud), hardware expansion can also have consequences for your software licenses.

So, apart from extra costs for resources, you also end up paying considerably more for your licenses, which are often priced per core or per CPU. A greater “risk” with your database in the cloud, for example with auto-scale options, is that the cloud provider has already scaled up resources for you.

That’s what they mean by “fully managed”. And the financial impact can be significant. One startup experienced that in March 2020 during an internal test with Google Cloud Run + Firebase on GCP. See image.


What cloud providers mean by “fully managed” is often misunderstood. They don’t mean that there is a data engineer on the other end who has an opinion about the data model, your application integration and I/O usage, who performs database maintenance and advises on indexing strategy.

By “fully managed”, they mean that your database is always available and automatically scales with growth or usage. And they solve that by adding extra resources. That’s also the revenue model of mainstream DBaaS solutions.

Cloud providers and DBaaS suppliers have no commercial interest in an efficiently designed and actively managed database environment. Yet cloud databases deserve the necessary attention too, especially to achieve and maintain the right balance between high performance and cost.

Proactive tuning

If we know the five most important causes of poor performance, we can anticipate them and avoid delays or unnecessary scaling of resources.

The most common causes are:

  • Growth of database and data traffic
  • Inefficient data model
  • Lack of (good) database maintenance
  • Ignoring updates and patches
  • Changing conditions in the environment
    • Moving the database to the cloud
    • Application update
    • Hardware
    • Etc.

The last two causes are relatively logical and clearly identifiable when they occur. In this blog, we’ll take a closer look at the first three issues.

1. Growth of database and data traffic

Due to increasing usage, storage of data, growth in users and a more extensive range of applications, the database will carry more and more load. It is important to discover which part of the database carries the heaviest load.
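As a minimal sketch of that first step, the snippet below ranks tables by row count in an in-memory SQLite database. The table names and data are made up for illustration; production engines expose far richer statistics (e.g. `pg_stat_user_tables` in PostgreSQL or `sys.dm_db_index_usage_stats` in SQL Server).

```python
import sqlite3

# Illustrative schema: find where data growth concentrates by ranking
# tables by row count. Table names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
""")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i,) for i in range(5000)])
conn.executemany("INSERT INTO customers (name) VALUES (?)", [("c",)] * 100)

# Enumerate user tables from the catalog, then count rows per table.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
sizes = sorted(((conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0], t)
                for t in tables), reverse=True)
for rows, table in sizes:
    print(f"{table}: {rows} rows")
```

Tracking these numbers over time (rather than once) is what reveals which part of the database is actually growing fastest.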

2. Inefficient data model

We have previously written a blog about how the influence of data modeling is often underestimated. Issues such as indexing strategy, data types and column order should be in line with the actual load on the database.

Many databases are still delivered without a solid indexing strategy, or the indexing has become outdated. In those cases, it’s recommended to implement an indexing strategy or to reorganize the current indexes. The choice of data types should also be made carefully. For example, when house numbers need to be stored, a smallint or int usually suffices and a bigint would be a waste of space.
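The effect of a fitting index is easy to demonstrate. In this hedged SQLite sketch (table and index names are invented), the same query goes from a full table scan to an index search once an index on the filtered column exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE addresses "
             "(id INTEGER PRIMARY KEY, zip TEXT, house_no INTEGER)")
conn.executemany("INSERT INTO addresses (zip, house_no) VALUES (?, ?)",
                 [(f"10{i % 100:02d}", i % 300) for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM addresses WHERE zip = '1042'"
before = plan(query)    # full table scan: "SCAN ..."
conn.execute("CREATE INDEX idx_addresses_zip ON addresses (zip)")
after = plan(query)     # index search: "... USING INDEX idx_addresses_zip"
print(before)
print(after)
```

The same before/after comparison works on any engine via its own `EXPLAIN` variant; checking plans like this is the core of an indexing strategy.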

The order of columns or fields can also have a substantial impact on the overall performance of the database. The fact that the specifics of columns (mandatory/non-mandatory) and data type (fixed/variable width) affect the actual storage on the data pages, is often overlooked.

Indexes also require regular maintenance: refreshing statistics, reorganizing, or even dropping and recreating them where necessary. Tables that are frequently mutated benefit greatly from this.
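In SQLite terms, that maintenance looks like the sketch below: `ANALYZE` refreshes the optimizer statistics and `REINDEX` rebuilds an index. The schema is invented; equivalent commands differ per platform (e.g. `ANALYZE` in PostgreSQL, `UPDATE STATISTICS` and `ALTER INDEX ... REORGANIZE/REBUILD` in SQL Server).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
conn.execute("CREATE INDEX idx_events_kind ON events (kind)")
conn.executemany("INSERT INTO events (kind) VALUES (?)",
                 [("click",) if i % 2 else ("view",) for i in range(2000)])

conn.execute("ANALYZE")                  # refresh optimizer statistics
conn.execute("REINDEX idx_events_kind")  # rebuild the index from scratch

# ANALYZE stores its results in sqlite_stat1 (row count, rows per key).
stats = conn.execute(
    "SELECT idx, stat FROM sqlite_stat1 WHERE tbl = 'events'").fetchall()
print(stats)
```

Fresh statistics are what let the optimizer pick the right plan for frequently mutated tables.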

3. Database maintenance

When database maintenance is overdue, current resource usage will not be representative of what the configuration and hardware could be capable of. Performing insert, update and delete statements on tables will cause indexes to become fragmented.

This means that space is reserved for an index but there is no longer any useful information in that location. Does this phenomenon only occur on certain database platforms? No, on all platforms. MySQL, MariaDB, Oracle, Sybase, SQL Server, PostgreSQL and even NoSQL variants such as MongoDB. The solution varies by platform.

Standard features are usually available, but they are often drastic and block normal database operations, such as inserting new records. However, there are ways, with and without third-party tooling, to minimize or even avoid those drastic actions.
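Fragmentation and its cleanup can be made visible in SQLite with a few pragmas, as in this sketch: deleting many rows leaves reclaimable free pages behind, and `VACUUM` rebuilds the database and returns the space. Other platforms have their own variants (`OPTIMIZE TABLE` in MySQL, `VACUUM` in PostgreSQL, `ALTER INDEX ... REBUILD` in SQL Server), some of which lock the table: exactly the drastic behavior mentioned above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO logs (msg) VALUES (?)", [("x" * 200,)] * 20_000)
conn.execute("DELETE FROM logs WHERE id <= 10000")  # empties whole pages
conn.commit()

free_before = conn.execute("PRAGMA freelist_count").fetchone()[0]
pages_before = conn.execute("PRAGMA page_count").fetchone()[0]
conn.execute("VACUUM")  # rebuild the database file, reclaiming free pages
free_after = conn.execute("PRAGMA freelist_count").fetchone()[0]
pages_after = conn.execute("PRAGMA page_count").fetchone()[0]
print(free_before, free_after, pages_before, pages_after)
```

The reserved-but-empty pages (`free_before`) are precisely the "space without useful information" described above; after `VACUUM` the freelist is empty and the database is smaller.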

To measure is to know

Performance tuning starts with measuring. It is essential to monitor the database environment well in order to pinpoint the pain points. In addition, structural monitoring lets you make comparisons that reveal changes in the environment.

Finally, the log file of the database can contain messages that require action. It is wise to inspect the log file on a regular (daily) basis in order to identify possible problems in time.

Which things should you measure as a minimum?

Performance counters (Windows) or PCL/Perf (Linux)

Use these tools to collect system data about CPU, memory and disk usage. Collecting data over time gives a better understanding of what has happened, instead of only what is happening right now. This makes trend analysis possible and pain points easier to trace.
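A minimal sketch of such periodic sampling, using only the Python standard library (in practice you would use a proper agent such as Performance Monitor, sar/perf or node_exporter, with a much longer retention window; the interval and metric set here are arbitrary choices):

```python
import os
import shutil
import time

def sample():
    # One timestamped reading of a few cheap system metrics.
    usage = shutil.disk_usage("/")
    load1, load5, load15 = os.getloadavg()  # POSIX only
    return {
        "ts": time.time(),
        "load_1m": load1,                    # CPU run-queue pressure
        "disk_used_pct": 100 * usage.used / usage.total,
    }

history = []
for _ in range(3):                           # tiny demo window
    history.append(sample())
    time.sleep(0.1)

# With samples collected over time you can compare periods
# (day vs night) instead of only looking at the current moment.
print(history[-1])
```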

CPU usage

CPU usage reflects the workload on the database environment. It yields important information, such as the differences between certain periods. Does almost nothing happen during the night and a lot during the day? Or do we hardly see any difference?

Memory usage

Read and write operations pass through memory (RAM), where the database builds up its buffers. If there is a high load on the system, is there enough memory to carry it?
Is there a lot of reading, or mainly writing? Are there peak moments during the day? Is such a peak the cause of a slow system at that moment? Part of the answer to these questions can already be found in the memory usage data.

Disk I/O

This information is important to investigate whether there is a delay in reading and writing, and how much is written and read. Underlying causes can be bad disks, inefficient disk performance, inefficient I/O from applications, and poorly performing queries.
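Poorly performing queries can also be caught at the application side, as in this hedged sketch that times each query and logs the slow ones. Most engines offer this natively (PostgreSQL's `log_min_duration_statement`, MySQL's slow query log); the 5 ms threshold and table here are arbitrary examples.

```python
import sqlite3
import time

SLOW_MS = 5.0          # arbitrary threshold for this illustration
slow_log = []

def timed_query(conn, sql, params=()):
    # Measure wall-clock time of one query; record it if it is slow.
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms >= SLOW_MS:
        slow_log.append((elapsed_ms, sql))   # candidate for tuning
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 [(i,) for i in range(50_000)])
rows = timed_query(conn, "SELECT COUNT(*) FROM t WHERE v % 7 = 0")  # full scan
for ms, sql in slow_log:
    print(f"{ms:.1f} ms  {sql}")
```

Queries that keep appearing in such a log are the natural starting point for the indexing and data-model work described earlier.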

Additional data

Of course you can collect additional data that gives you more in-depth information about queries, deadlocks, transactions, buffers, etc.

Grafana and Prometheus are building blocks for setting up monitoring. A monitoring solution is created by scraping exporters with Prometheus and adding dashboards to Grafana. Those exporters and dashboards, as well as various plugins, can often be found on GitHub.
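As a rough sketch, a Prometheus scrape job for a database exporter is only a few lines of `prometheus.yml`. The host name below is hypothetical; the port assumes postgres_exporter's default of 9187.

```yaml
# prometheus.yml fragment: scrape a PostgreSQL exporter.
scrape_configs:
  - job_name: postgres
    static_configs:
      - targets: ["db-host:9187"]   # hypothetical host, default exporter port
```

A Grafana dashboard is then pointed at Prometheus as its data source to visualize these metrics.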

There are also ready-made (open source) monitoring solutions on the market for different types of database engines (PostgreSQL, MySQL variants, MariaDB, but also DBaaS such as Aurora, RDS and Azure SQL Database), such as Percona Monitoring and Management (PMM).

DBaaS restrictions

In a DBaaS situation you will often have no access to the OS, and you will have to be creative to distil enough information from your environment to act on. But even with DBaaS engines you can monitor many activities and respond to them proactively.

Efficient database administration

We live in a world that is increasingly driven by data. Good management of your data platform (in the cloud) is therefore becoming increasingly crucial. A good database expert knows exactly how the latest tooling works, feels comfortable working both in open source and in licensed database environments and enjoys solving complex database puzzles. But what is the best way to achieve effective database administration? We have written a white paper to give you some pointers. Download our whitepaper here.

Do you want advice on how to stay ahead of performance problems? Want to know more about a maintenance plan or indexing strategy? Do you want to choose a DBaaS solution but you don’t know yet what the best choice is? Feel free to contact us!