Deployment of Microsoft SQL database with Azure DevOps
Hello, data folks!
I have published several articles on the blog about databases and maintaining them in an SSDT project. One of those posts explained, step by step, how to import a database from a production environment and thus create the SSDT project.
In this video, I will explain how to set up build and release pipelines in Azure DevOps, so that we can deploy and publish an SSDT database project to our target SQL Server instance incrementally.
This is a very simple example only – how to do that with a minimum number of steps.
But in the next videos, I will show you how to extend it by adding more and more features.
Stay tuned!
10 Comments
Floatila
December 22, 06:26
Could you please suggest how to add new seed data/lookup data into two tables which have a foreign key relationship? These tables already have some data, and I want to add new lookup data to the existing data.
Please suggest.
How do I use MERGE INTO? Are there any other ways to set up lookup data?
Kamil Nowinski
December 23, 22:46
Generally speaking, if you have reference data generated from SSDT, all IDs should be static (and identical between environments). That approach still leaves users the flexibility to manage the data in the table.
All other scenarios, although possible, make things more complicated. Check out this post on my blog where I described how to script the data and generate a MERGE statement:
https://azureplayer.net/2019/10/script-deploy-the-data-for-database-from-ssdt-project/
In a more sophisticated case, you can still modify the generated script by adding a subselect as a lookup for the foreign key(s), for example:
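A minimal sketch of what such a statement could look like (the dbo.Category and dbo.SubCategory tables and their columns are hypothetical, used here only for illustration):

-- Hypothetical lookup tables: dbo.Category (Id, Name) and dbo.SubCategory (Id, Name, CategoryId -> Category.Id)
MERGE INTO dbo.SubCategory AS target
USING (
    SELECT v.Id, v.Name, c.Id AS CategoryId
    FROM (VALUES
        (10, N'Road Bikes',     N'Bikes'),
        (11, N'Mountain Bikes', N'Bikes')
    ) AS v (Id, Name, CategoryName)
    JOIN dbo.Category AS c ON c.Name = v.CategoryName   -- subselect/lookup resolving the FK value
) AS source
ON target.Id = source.Id
WHEN MATCHED THEN
    UPDATE SET Name = source.Name, CategoryId = source.CategoryId
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, CategoryId) VALUES (source.Id, source.Name, source.CategoryId);

In an SSDT project, a script like this typically sits in the post-deployment script, so it is re-applied on every publish; if the Id column is an IDENTITY column, wrap the statement in SET IDENTITY_INSERT ... ON/OFF.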
If I didn't answer your question, please DM me and describe your scenario and the reason for needing dynamic FK values.
Larry
April 21, 20:57
These are great examples; however, we're working with an older application in production, not a new application. So adding a field is simple enough with an ALTER. I'm curious about two things. What about renaming or deleting a field that has data in it?
Also, what about rolling back a release? With regular code I would just re-deploy the last Azure DevOps release to get back to our last good release. What if we've deployed the database release but something is wrong and we need to go back to the previous state? What happens, especially with the data, if we had added new fields or tables in the new release?
Kamil Nowinski
May 03, 15:19
Hi Larry, thanks for your comment here.
When you say “working with an older application in production”, do you mean using a migration-based approach for deployment? If so, that's also fine and works for many companies. It's just another approach.
In terms of renaming columns: SSDT has a special mechanism (the refactorlog) which records these kinds of operations as refactorings and runs them only once.
Every deployment (change script) can wrap everything in a transaction. If something goes wrong, the whole transaction is rolled back.
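As a rough illustration only (a simplified sketch, not the exact script SSDT generates; the dbo.Customer table and its columns are hypothetical), a change script published with transactional scripts enabled (the SqlPackage publish property IncludeTransactionalScripts) follows approximately this shape:

-- Simplified shape of a transactional deployment script
SET XACT_ABORT ON;          -- any runtime error aborts and rolls back the transaction
BEGIN TRANSACTION;

-- additive change: no data loss
ALTER TABLE dbo.Customer ADD MiddleName NVARCHAR(50) NULL;

-- column rename coming from the project's refactorlog, applied only once per target
EXECUTE sp_rename N'dbo.Customer.Surname', N'LastName', N'COLUMN';

COMMIT TRANSACTION;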
Vikrant
June 05, 11:58
Hi,
Thank you for a very informative video. I followed all the steps mentioned in the video, except that instead of Git I used Azure Repos. However, I am facing a peculiar issue as described below –
I imported AdventureWorksLT into Visual Studio. I also installed this database as a Dev database on an Azure SQL DB instance. I connected SSDT to Azure Repos and then built both the build & release pipelines targeting a new, empty QA database (on the same Azure server). When I first ran the pipeline, the AdventureWorksLT database was copied into the QA database.
Next, I made some changes to the AdventureWorksLT database in Visual Studio, committed and pushed the changes. The build pipeline completed successfully and the release pipeline was triggered. However, the release pipeline failed. Below is my analysis of the failure –
The new changes that I made in Visual Studio were successfully copied to the QA database. However, the release pipeline failed because it is again trying to create all the tables, and this fails as the tables are already present.
Question – why is the build creating a DACPAC file which creates all the tables once again? Should it not create a DACPAC file with just the delta changes?
I would be grateful if you could assist and tell me what the issue is and/or whether I am doing anything wrong.
Thanks
Vikrant
Kamil Nowinski
June 10, 21:26
Hi Vikrant,
Thanks for your comment. It's tough to help without seeing the whole project and the log from Azure DevOps.
However, you said:
“1) The new changes that I made in Visual Studio were successfully copied to the QA database.”
“2) the release pipeline failed because it is again trying to create all the tables”
My understanding is that somehow you did manage to deploy a database to QA, right? Which pipeline has done that?
Answering your question: a DACPAC is a compiled version of your database project, thus it should contain ALL the objects every time you compile (build) the database project in Visual Studio. This is your build pipeline. The second step is to compare the DACPAC (the source database) against a target database (on the server), generate a differential script and execute it. This is your release pipeline.
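A rough illustration of that difference (the LoyaltyLevel column below is hypothetical, just an example of a single new change): the DACPAC always describes the complete desired state, while the script generated at release time contains only the delta.

-- DACPAC: full model of the database (all tables, views, procedures, ...)
-- Generated publish script: only what differs from the target, e.g.:
ALTER TABLE [SalesLT].[Customer]
    ADD [LoyaltyLevel] NVARCHAR(20) NULL;
GO

So the DACPAC itself never contains a delta; the delta exists only in the differential script produced when the DACPAC is compared against the target database during the release.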
If you need more help, please create a ticket on the Stack Overflow portal and add additional information such as the log from Azure DevOps; screenshots would also be useful. Then put a new comment here with a link to the ticket.
Shubham
July 29, 14:45
Hi,
Thanks for this video. It's very useful.
I have done all the steps and my build as well as release pipelines are running successfully.
Now what I am trying to do is incremental deployment.
For example, if I change only one SP/view, only that change should get deployed via the release pipeline instead of dropping and creating all the objects again.
Is this possible?
Thanks,
Shubham
Kamil Nowinski
July 29, 16:42
Hi Shubham. I'm glad that it works for you.
The incremental deployment script is created by the release pipeline, which uses sqlpackage.exe behind the scenes. It works like that by default. Give it a try.
Rittik
September 07, 15:29
Can you please guide me: if I want to alter the schema and deploy it without data loss, how can I do it?
Kamil Nowinski
October 16, 19:26
Rittik, everything depends on the particular case. Could you provide more details or an example?