Deployment of Microsoft SQL database with Azure DevOps

Hello, data folks!
I have published several articles on the blog about databases and their maintenance in an SSDT project. One of those posts explained, step by step, how to import a database from a production environment and thus create an SSDT project.
In this video, I will explain how to set up build and release pipelines in Azure DevOps so that we can deploy and publish an SSDT database project to our target SQL Server instance incrementally.
This is a very simple example, showing how to do it with a minimum number of steps.
In the next videos, I will show you how to extend it by adding more and more features.
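
For readers who prefer text, here is a rough sketch of what the two pipelines do under the hood, assuming hypothetical project, server and database names (the video walks through the actual Azure DevOps tasks):

    # Hedged sketch: all paths, server and database names are placeholders,
    # and msbuild.exe / SqlPackage.exe are assumed to be on the PATH.

    # Build pipeline: compile the SSDT project into a DACPAC.
    & msbuild.exe "MyDatabase.sqlproj" /p:Configuration=Release

    # Release pipeline: publish the DACPAC to the target server. SqlPackage
    # compares the DACPAC with the target database and applies only the
    # differences, which is what makes the deployment incremental.
    & SqlPackage.exe /Action:Publish `
        /SourceFile:"bin\Release\MyDatabase.dacpac" `
        /TargetServerName:"myserver.database.windows.net" `
        /TargetDatabaseName:"MyDatabase"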

Stay tuned!

10 Comments

  1. Floatila
    December 22, 06:26 Reply

    Could you please suggest how to add new seed/lookup data into two tables which have a foreign key relationship? These tables already have some data, and I want to add new lookup data to the existing data.
    Please suggest how to use MERGE INTO, or any other way to set up the lookup data.

    • Kamil Nowinski
      December 23, 22:46 Reply

      Generally speaking, if you have reference data generated from SSDT, all IDs should be static (and identical between environments). That approach still leaves users the flexibility to manage the data in the table.
      All other scenarios, although possible, make the cases complicated. Check out this post on my blog, where I described how to script the data and generate a MERGE statement:
      https://azureplayer.net/2019/10/script-deploy-the-data-for-database-from-ssdt-project/
      In a more sophisticated case, you can still modify the generated script by adding a subselect as a lookup for the foreign key(s). A sketch of the basic MERGE approach follows below.
      If I didn't answer your question, please DM me and describe your scenario and the reason for needing dynamic FK values.
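
      A minimal sketch of that MERGE approach, assuming two hypothetical tables (dbo.LookupCategory and dbo.LookupItem, linked by a foreign key) with static IDs. In an SSDT project this T-SQL would typically live in a post-deployment script; here it is run ad hoc from PowerShell:

          # Hedged sketch: table, column, server and database names are made up.
          $seed = "
          MERGE INTO dbo.LookupCategory AS tgt
          USING (VALUES (1, N'Colour'), (2, N'Size')) AS src (Id, Name)
              ON tgt.Id = src.Id
          WHEN NOT MATCHED BY TARGET THEN
              INSERT (Id, Name) VALUES (src.Id, src.Name);

          -- Child table: CategoryId is the foreign key; static IDs keep
          -- both tables identical between environments.
          MERGE INTO dbo.LookupItem AS tgt
          USING (VALUES (10, 1, N'Red'), (11, 1, N'Blue'), (20, 2, N'Large'))
              AS src (Id, CategoryId, Name)
              ON tgt.Id = src.Id
          WHEN NOT MATCHED BY TARGET THEN
              INSERT (Id, CategoryId, Name)
              VALUES (src.Id, src.CategoryId, src.Name);
          "
          # If the Id columns are IDENTITY, wrap each MERGE in
          # SET IDENTITY_INSERT ... ON/OFF inside the script.
          # Invoke-Sqlcmd comes from the SqlServer PowerShell module.
          Invoke-Sqlcmd -ServerInstance 'myserver' -Database 'MyDatabase' -Query $seed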

  2. Larry
    April 21, 20:57 Reply

    These are great examples; however, we're working with an older application in production, not a new application. Adding a field is simple enough with an ALTER. I'm curious about two things. First, what about renaming or deleting a field that has data in it?

    Also, what about rolling back a release? With regular code, I would just re-deploy the last Azure DevOps release to restore the last good release. What if we've deployed the database release, but something is wrong and we need to go back to the last state? What happens, especially to the data, if we added new fields or tables in the new release?

    • Kamil Nowinski
      May 03, 15:19 Reply

      Hi Larry, thanks for your comment here.
      When you say "working with an older application in production", do you mean having a migration-based approach for deployment? If so, that's also fine and works for many companies. It's just another approach.
      In terms of renaming columns: SSDT has a special mechanism (the refactor log) which records these kinds of operations as refactorings and runs them only once.
      Every deployment (change script) can wrap everything in a transaction; if something goes wrong, the whole transaction is rolled back. A sketch of how to request that follows below.
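
      A minimal sketch of requesting that behaviour explicitly, assuming SqlPackage.exe is on the PATH and all names are placeholders:

          # Hedged sketch: the IncludeTransactionalScripts publish property asks
          # SqlPackage to use transactional statements where possible, so a
          # failed deployment rolls back. Names below are hypothetical.
          & SqlPackage.exe /Action:Publish `
              /SourceFile:"MyDatabase.dacpac" `
              /TargetServerName:"myserver.database.windows.net" `
              /TargetDatabaseName:"MyDatabase" `
              /p:IncludeTransactionalScripts=True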

  3. Vikrant
    June 05, 11:58 Reply

    Hi,

    Thank you for the very informative video. I followed all the processes mentioned in the video, except that instead of Git I used Azure Repos. However, I am facing a peculiar issue, as described below.

    I imported AdventureWorksLT into my Visual Studio project. I also installed this database as a Dev database on an Azure SQL DB instance. I connected SSDT to the Azure Repo and then built both the build & release pipelines, targeting a new, empty QA database (on the same Azure server). When I first ran the pipeline, the AdventureWorksLT database was copied into the QA database.

    Next, I made some changes to the AdventureWorksLT database in Visual Studio, then committed and pushed the changes. The build pipeline ran successfully and the release pipeline was triggered. However, the release pipeline failed. Below is my analysis of the failure.

    The new changes that I made in Visual Studio were successfully copied to the QA database. However, the release pipeline failed, as it is again trying to create all the tables, and failing because the tables are already present.

    Question: why is the build creating a DACPAC file which creates all the tables once again? Should it not create a DACPAC file with just the delta changes?

    I would be grateful if you could assist me and advise what the issue is and/or whether I am doing anything wrong.

    Thanks
    Vikrant

  4. Kamil Nowinski
    June 10, 21:26 Reply

    Hi Vikrant,
    Thanks for your comment. It's tough to help without seeing the whole project and the log from Azure DevOps.
    However, you said:
    “1) The new changes that I did in Visual studio were successfully copied to QA database.”
    “2) release pipeline failed as it is again trying to create all the tables”
    My understanding is that somehow you did manage to deploy the database to QA, right? Which pipeline did that?
    Answering your question: a DACPAC is a compiled version of your database project, so it should contain ALL the objects every time you compile (build) the database project in Visual Studio. This is your build pipeline. The second step is to compare the DACPAC (the source database) against a target database (on the server), generate a differential script and execute it. This is your release pipeline; a sketch of that compare step follows below.
    If you need more help, please create a ticket on the Stack Overflow portal and add extra information; the log from Azure DevOps and screenshots would also be useful. Then put a new comment here with a link to the ticket.
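
    A minimal sketch of that compare step, with hypothetical names and authentication parameters omitted: /Action:Script makes SqlPackage generate the differential script without executing it, so you can inspect exactly what the release would run.

        # Hedged sketch: generate (but do not execute) the differential script
        # that SqlPackage would run against the target database.
        # All file, server and database names are hypothetical placeholders.
        & SqlPackage.exe /Action:Script `
            /SourceFile:"AdventureWorksLT.dacpac" `
            /TargetServerName:"qa-server.database.windows.net" `
            /TargetDatabaseName:"AdventureWorksLT-QA" `
            /OutputPath:"diff.sql"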

  5. Shubham
    July 29, 14:45 Reply

    Hi,

    Thanks for this video. It's very useful.
    I have done all the steps, and both my build and release pipelines are running successfully.
    Now what I am trying to achieve is incremental deployment.
    For example, if I change one stored procedure or view, only that object should get deployed via the release pipeline, instead of all the objects being dropped and created again.
    Is this possible?

    Thanks,
    Shubham

    • Kamil Nowinski
      July 29, 16:42 Reply

      Hi Shubham. I’m glad that it works for you.
      The incremental deployment script is created by the release pipeline, which uses sqlpackage.exe behind the scenes (see the sketch in my reply to Vikrant above). It works like that by default. Give it a try.

  6. Rittik
    September 07, 15:29 Reply

    Could you please give some guidance: if I want to alter the schema and deploy the change without data loss, how can I do it?

    • Kamil Nowinski
      October 16, 19:26 Reply

      Rittik, everything depends on the particular case. Could you provide more details or an example?
