
Who will join us next year for Data Saturday?

Knowledge development is paramount at OptimaData, whether it’s technical knowledge or broader personal development.

That’s why I went to Data Saturday on October 5. This event used to be called SQL Saturday and is one of the top eight database conferences. But what do you really take away from it, and why is such a conference useful?

Data Saturday

We are now more than a month on, but the things I saw and heard there are still playing through my head and hands.

Therefore, in this blog, I’d like to take you through the experiences and insights I gained at the Data Saturday Conference.

The conference consisted of a pre-conference day with some deep-dive training and the conference on Saturday which was held at the Pathé cinema in Leidsche Rijn. Spread throughout the day, six different rooms hosted hourly talks of about 45 minutes each.

OptimaData was well represented: five of us attended. This allowed us to attend sessions both together and individually, which afterwards gave us plenty to discuss, both about the sessions we attended together and about those we attended separately.

The role of the DBA in DevOps

Recurring topics during Data Saturday were automation, containers and DevOps. Well-known names like Hamish Watson and John Martin gave extensive workshops during Friday’s pre-conference. Both are Microsoft Data Platform MVPs. On Saturday, William Durkin and Sander Stad also gave a number of talks on these topics.

The topics of automation, containers and DevOps fit very nicely with OptimaData’s vision of database management and the steadily changing role of the database administrator.

Many customers now expect a database administrator to have knowledge of containers, DevOps and automation. These workshops matched that need very well. Why?

In the past, there was often tension between the needs of the development team and those of the DBAs: developers want to develop quickly, while DBAs are responsible for keeping the operational environment stable and performing well.

New features or releases from the developers can make the database (and thus the application) slower, and sometimes things even break. None of this makes a DBA happy. With the introduction of DevOps (a combination of Development and Operations), these problems should be solved: everyone becomes one team with the same goal in mind.

As a result, the database becomes directly part of the development process and the proactive DBA tasks fit within the CI/CD pipeline, making releases faster and more efficient.

Automating rollout

In the talk “Automation for the DBA: embrace your inner sloth,” William Durkin took us through his views on how lazy you should be as a DBA.

My expectations for this talk were pretty high, because I see the change in the DBA profession that is happening right now: far more is being automated, and the role of the DBA is shifting from someone who keeps things running to someone who automates the deployment of database platforms.

I agree with William’s view that DBAs should automate repetitive tasks, using standard tooling that is already out there as much as possible. After all, it is a waste to reinvent the wheel when someone else in the community has already done the work. A great example of this is dbatools, a community PowerShell module full of commands aimed specifically at SQL Server.
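As a small taste of that “lazy DBA” approach, here is a sketch using dbatools. The server names are illustrative, not from the talk: a single command restores the latest backups of a production instance to a scratch server and verifies them.

```powershell
# One-time install of dbatools from the PowerShell Gallery.
Install-Module dbatools -Scope CurrentUser

# Verify that last night's backups of PROD-SQL01 actually restore and are
# consistent, by restoring them to TEST-SQL01 and running integrity checks.
# Server names here are hypothetical placeholders.
Test-DbaLastBackup -SqlInstance 'PROD-SQL01' -Destination 'TEST-SQL01' |
    Format-Table
```

A nightly scheduled task around a command like this replaces what used to be hours of manual restore testing.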

Tooling around automation and DBA DevOps

To make your job as a DBA in a DevOps environment easier, there are numerous tools available. What tooling was discussed at the conference?

Testing with tSQLt

A key pillar of automation is automated code testing.

With a tool called tSQLt, it becomes possible (even for a DBA) to test changes to a database and see their impact on the workings of an application.
tSQLt is a framework for unit testing: testing functionality in the smallest possible units, for example a stored procedure, a function, or reading, adding and removing data.

This makes it possible to run a wide range of tests in a very short time and so detect errors introduced by software changes.
Hamish on Friday and Sander on Saturday introduced us to the capabilities of tSQLt in a series of demos.

They showed how unit tests can cover both correct and incorrect situations. We even saw how all the manually created unit tests could be generated with a PowerShell module that Sander had written.
Initiatives like this make it much easier to get started with unit testing, because you no longer have to write every test yourself.
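To give an idea of what such a test looks like, here is a minimal tSQLt sketch. The table, function and test names are made up for illustration; the tSQLt procedures (NewTestClass, FakeTable, AssertEquals, Run) are the framework’s real building blocks.

```sql
-- Create a test class (a schema that groups related tests).
EXEC tSQLt.NewTestClass 'OrderTests';
GO

CREATE PROCEDURE OrderTests.[test GetOrderCount counts only open orders]
AS
BEGIN
    -- FakeTable swaps the real table for an empty copy, isolating the test
    -- from production data and constraints.
    EXEC tSQLt.FakeTable 'dbo.Orders';
    INSERT INTO dbo.Orders (OrderId, Status) VALUES (1, 'Open'), (2, 'Closed');

    -- dbo.GetOrderCount is a hypothetical scalar function under test.
    DECLARE @actual INT = dbo.GetOrderCount('Open');

    EXEC tSQLt.AssertEquals @Expected = 1, @Actual = @actual;
END;
GO

-- Run every test in the class; each test runs in its own rolled-back transaction.
EXEC tSQLt.Run 'OrderTests';
```

Because every test runs inside a transaction that is rolled back, the database is left untouched, which is exactly what makes this safe to wire into a CI/CD pipeline.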

Powershell Desired State Configuration

John Martin told us in the pre-conference workshop about the capabilities of PowerShell Desired State Configuration (DSC), a framework for describing and enforcing the configuration of an IT infrastructure.

With PowerShell Desired State Configuration, changes can be controlled, and a configuration can even be restored the moment it deviates significantly from the pre-specified state.

With PowerShell DSC, it is therefore possible to automate the installation of SQL Server instances in a uniform way. John showed us a few examples in which, with a scripted PowerShell DSC configuration, he deployed multiple servers that looked exactly the same.

He also talked about the ability to report on deployed environments, showing how “compliant” these environments are.
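The shape of such a configuration looks roughly like the sketch below. It assumes the SqlServerDsc resource module; the node name, source path and account are illustrative, not taken from John’s demos.

```powershell
# Sketch of a DSC configuration for a standardized SQL Server installation.
# Assumes the SqlServerDsc module is installed; names and paths are examples.
Configuration StandardSqlInstance {
    Import-DscResource -ModuleName SqlServerDsc

    Node 'SQLSERVER01' {
        SqlSetup 'InstallDefaultInstance' {
            InstanceName        = 'MSSQLSERVER'
            Features            = 'SQLENGINE'
            SourcePath          = 'D:\SQL2017\Setup'
            SQLSysAdminAccounts = @('CONTOSO\DBA-Team')
        }
    }
}

# Compile the configuration to a MOF file and apply it to the node.
StandardSqlInstance -OutputPath 'C:\DSC\StandardSqlInstance'
Start-DscConfiguration -Path 'C:\DSC\StandardSqlInstance' -Wait -Verbose

# Report whether the node still matches the desired state ("compliance").
Test-DscConfiguration -Detailed
```

The same configuration applied to ten nodes yields ten identical servers, and Test-DscConfiguration is what drives the kind of compliance reporting John showed.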


John also showed us how to use the Terraform tool to deploy a web application with very little effort, including the underlying database, with deployments driven from source control (GitHub).

In the past I have had some experience with Terraform at one of our clients, so it was very interesting for me to see how John approached this.

One of the nice features I didn’t know yet was Terraform’s ability to show the relationships between objects in a graph. This provides not only clarity about those relationships but also a piece of documentation, something that would normally have to be created by hand and costs a lot of time.
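Generating that graph takes a single pipeline; rendering the image requires Graphviz to be installed alongside Terraform:

```powershell
# Run from the directory containing your .tf files.
# "terraform graph" emits the dependency graph in DOT format;
# Graphviz's "dot" renders it to an image.
terraform graph | dot -Tpng -o graph.png
```

The resulting picture doubles as up-to-date architecture documentation, regenerated on every change.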


Finally, on Friday, Hamish gave us a brief demo of using Docker containers in combination with SQL Server. Since the release candidate of SQL Server 2017, it has been clear that Microsoft supports not only the Windows operating system with its database platform, but also Linux and Docker containers.
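Getting a SQL Server 2017 instance running in a Linux container is a one-liner along these lines (the container name and password are illustrative; the password must meet SQL Server’s complexity rules):

```powershell
# Pull and run SQL Server 2017 in a Linux container, listening on port 1433.
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Str0ng!Passw0rd' `
    -p 1433:1433 --name sql2017 `
    -d mcr.microsoft.com/mssql/server:2017-latest
```

Seconds later you have a disposable instance to test against, which is precisely what makes containers so attractive for DBA and DevOps work.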

With this, Microsoft has broken new ground, which I think is very interesting. With the ability to run SQL Server on Linux or in a Docker container, all sorts of interesting possibilities emerge. What those possibilities are? I will elaborate on that in an extensive blog series in December.

Want to know more about the Data Saturday workshops?

I thoroughly enjoyed the Data Saturday workshops.

The two venues were beautiful, everything was arranged to perfection, the topics and speakers were very diverse, and there was plenty of opportunity to speak with other people in the profession.
Want to know even more about what we at OptimaData saw and heard at Data Saturday 2019?

Then contact us and join us next year!