
SQL Server to PostgreSQL Migration Guide

Explore our SQL Server to PostgreSQL migration guide. Understand the process, learn from our expertise, and discover practical tips to facilitate your migration.

Cal Mitchell
Last Updated: Dec 13, 2023


Babelfish

The first step is deciding whether or not to use Babelfish. In case you're unaware, Babelfish is a PostgreSQL extension from Amazon that allows PostgreSQL to understand queries written in SQL Server syntax.

Babelfish pros and cons

Using Babelfish may seem like an easy choice, but there are many factors to consider. In our opinion, the most important pros and cons of Babelfish are:

Pros

  • Drastically reduces the amount of work that needs to be done to translate stored procedures, client queries, column types, etc.
  • Babelfish, like PostgreSQL, is open-source. There is no vendor lock-in.

Cons

  • Babelfish does not support 100% of SQL Server's features or syntax, so you must find workarounds.
  • As far as we know, Amazon Aurora is the only managed database that offers the Babelfish feature, so you will have to use Aurora, or run Babelfish yourself.

If your organization is on AWS and has historically used SQL Server as your main relational database, then it may make sense to investigate Babelfish. However, if your team has lots of PostgreSQL experience and you only have a few SQL Server databases, then it may be worth it to homogenize your data stack on PostgreSQL. In the end, the choice is really up to you.

This guide assumes you have decided to use plain PostgreSQL, without Babelfish. We also have a separate SQL Server to Babelfish Migration Guide.

Preparation / decision making

The first part of the process is high-level research. You should identify MSSQL features that PostgreSQL does not support and find parts of your application that don't need to be migrated.

The point of this exercise is to make a yes/no decision on whether this project is worth your time, energy, and money.

Identify non-compatible features

Most non-trivial applications make use of some SQL Server feature that isn't available in PostgreSQL. Common examples of these features include:

  • SSIS
  • Memory optimized tables
  • Custom file groups
  • Columnstore indexes
  • Clustered indexes
  • Seven digits of fractional-second datetime precision (PostgreSQL supports only six)

Also, watch out for table and column names that conflict with PostgreSQL's reserved keywords!
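
For example, user and order are both reserved in PostgreSQL. Here is a minimal sketch, with made-up table and column names, of how such identifiers must be quoted on each side:

-- SQL Server escapes reserved names with brackets:
-- CREATE TABLE dbo.[user] (id INT, [order] INT);

-- PostgreSQL uses double quotes instead. Note that quoted
-- identifiers are case-sensitive, so keep them lowercase.
CREATE TABLE dbo."user" (
    id      integer,
    "order" integer
);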

We have not yet encountered a case where PostgreSQL couldn't replace MSSQL, but there are definitely use cases that require some of these features. For example, if your application requires low latency OLAP queries on large amounts of data, you will miss columnstore indexes. Thankfully, PostgreSQL's open-source nature solves many of these problems, as there is usually an extension for any common use case.

Give your app a haircut

There are almost always schemas, tables, and sometimes even entire databases that are no longer in use. For example, a recent client of ours claimed they needed to migrate 25 databases, but upon further investigation, fewer than 20 were actually used in production.

We use a few ad-hoc SQL scripts to determine how often database objects - stored procedures, tables, etc. - are actually used. It is essential to do that homework ahead of time so you don't waste effort migrating things that aren't needed.

It is especially helpful to identify stored procedures that are no longer in use - translating and testing complex stored procedures is probably the most time-consuming part of a schema translation process.

Here is one of the many SQL scripts we use to discover stored procedure usage, credit to Thomas Stringer on DBA Stack Exchange:

select
  db_name(st.dbid) as database_name,
  object_name(st.objectid) as name,
  p.size_in_bytes / 1024 as size_in_kb,
  p.usecounts,
  st.text
from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) st
where p.objtype = 'proc'
and st.dbid = db_id('SomeDatabase')
order by p.usecounts desc;

Keep in mind that this script only tells you how many times a stored procedure has been executed since its plan was last compiled. Microsoft's Execution Plan Caching and Reuse page explains how often that happens, although that particular article is slightly dated.
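
A similar approach works for tables. The sketch below is a hypothetical variant, not the exact script we use, and leans on sys.dm_db_index_usage_stats (which, like the plan cache, resets when the server restarts):

select
  t.name as table_name,
  sum(us.user_seeks + us.user_scans + us.user_lookups) as reads,
  sum(us.user_updates) as writes
from sys.tables t
left join sys.dm_db_index_usage_stats us
  on us.object_id = t.object_id
  and us.database_id = db_id()
group by t.name
-- tables that have never been touched show NULL reads and sort first
order by reads;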

Schema translation

Translating a schema from MSSQL to PostgreSQL isn't that hard. In fact, there are automated services (such as Amazon DMS) that can help you with the process.

The advent of advanced AI systems has also drastically lessened the effort required to translate SQL from one syntax to another. It is truly amazing what the latest LLMs can do. We make use of them all the time, and are always testing new LLMs to see which ones are the most accurate for our use case.

Side note: we never train LLMs on customer DDL code, nor do we enter customer DDL code into LLMs (such as ChatGPT) that train on it.

However, translating a schema well is still a labor-intensive, detail-oriented process. The cost of making a mistake in your database can be very high. In all cases we've seen, there are hard decisions to be made and trade-offs to be considered.

At the end of the day, you need to export your schema's DDL code via SSMS and translate it to PostgreSQL syntax. The primary deliverable is a working SQL file that creates a schema which is compatible with the original database. The primary objects you need to convert are:

  • Databases
  • Schemas
  • Tables
  • Views
  • Columns
  • Constraints
  • Indexes
  • Stored procedures
  • Functions
  • Triggers
  • Sequences
  • Custom types

Ideally, someone with professional experience working with both SQL Server and PostgreSQL will build the new SQL file, potentially with the help of automated tools. Each RDBMS has its own unique way of doing things, and even if you make something that "works", it may not be optimal.

Our process for translating these objects is simple:

  • Spin up a dev environment (usually a single VM with 16 GB RAM in the customer's VPC).
  • Put a backup file of MSSQL on the dev instance, as well as a DDL file exported by SSMS.
  • Deploy MSSQL and PostgreSQL via Docker Compose.
  • Create a SQL file in which you will build the compatible PostgreSQL schema.

Now it's time to work through the MSSQL DDL file, translating objects in the following order:

  • Schemas
  • (Optional) types / sequences
  • Tables / columns (without constraints)
  • Views
  • Indexes
  • Constraints
  • Stored procedures
  • Functions
  • Triggers

The process isn't complex; you just have to do the work. It is long and detail-oriented, so be sure to constantly re-compile the PostgreSQL database with your in-progress DDL file to ensure you haven't made any mistakes.
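
To give a flavor of what the translation work looks like, here is a minimal before-and-after sketch for a made-up table (the names and types are illustrative, not from a real schema, and it assumes CREATE SCHEMA dbo has already been run on the PostgreSQL side):

-- SQL Server original, as exported by SSMS:
-- CREATE TABLE dbo.orders (
--     id INT IDENTITY(1,1) NOT NULL,
--     customer_name NVARCHAR(200) NOT NULL,
--     placed_at DATETIME2(7) NOT NULL,
--     is_paid BIT NOT NULL,
--     CONSTRAINT pk_orders PRIMARY KEY CLUSTERED (id)
-- );

-- PostgreSQL translation:
CREATE TABLE dbo.orders (
    id            integer GENERATED ALWAYS AS IDENTITY,
    customer_name varchar(200) NOT NULL,  -- PostgreSQL strings are already Unicode
    placed_at     timestamp(6) NOT NULL,  -- one less digit of precision than datetime2(7)
    is_paid       boolean NOT NULL,       -- BIT maps naturally to boolean
    CONSTRAINT pk_orders PRIMARY KEY (id) -- PostgreSQL has no clustered indexes
);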

Sample Docker Compose file

Here is a Docker Compose file that will spin up PostgreSQL and MSSQL instances, and copy SQL files from your local machine into the containers for initial DB setup. We recommend running the DDL file exported from SSMS on SQL Server startup (MSSQL-SETUP-FILE.sql in the example below) and your working DDL file on PostgreSQL startup (POSTGRESQL-SETUP-FILE.sql).

This way, if either database gets into a weird state because of the testing you're doing on it, you can just blow away the entire container and start from scratch.

version: '3.8'
services:
  mssql:
    image: mcr.microsoft.com/mssql/server
    container_name: mssql
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourPass456"
    platform: linux/amd64
    volumes:
      - ./MSSQL-SETUP-FILE.sql:/setup.sql
    command: >
      /bin/bash -c "
        /opt/mssql/bin/sqlservr &
        sleep 30;
        /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'YourPass456' -d master -i /setup.sql;
        tail -f /dev/null "
    ports:
      - "1433:1433"
    deploy:
      resources:
        limits:
          memory: 8G
  postgresql:
    image: postgres
    container_name: postgresql
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: "YourPass456"
    volumes:
      - ./POSTGRESQL-SETUP-FILE.sql:/docker-entrypoint-initdb.d/setup.sql
    deploy:
      resources:
        limits:
          memory: 4G

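With both setup files in place, docker compose up -d starts the two databases with your schemas loaded, and because no volumes persist the data directories, docker compose down followed by another up gives you that clean slate.
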
SQL Server to PostgreSQL type mappings

While translating tables, you will need to translate columns from MSSQL types to PostgreSQL types. Please view our article SQL Server to PostgreSQL Data Type Mappings for an in-depth guide on how to translate your column types.
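
As a quick taste of that article (a reference for the most common cases, not a complete list), the mappings we reach for most often look like this:

-- NVARCHAR(n) / NVARCHAR(MAX) -> varchar(n) / text (PostgreSQL text is Unicode)
-- DATETIME2                   -> timestamp
-- DATETIMEOFFSET              -> timestamptz
-- BIT                         -> boolean
-- UNIQUEIDENTIFIER            -> uuid
-- VARBINARY(MAX)              -> bytea
-- MONEY                       -> numeric(19,4)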

Data Movement

We offer a free SQL Server to PostgreSQL data migration tool that can migrate all of the user data in a SQL Server database to a PostgreSQL database with just a few clicks. Visit our database migration products page for more information.

We also offer a paid tool that allows you to pull the initial load of a database from a SQL Server snapshot, then replicate data changes to PostgreSQL in near real time. This allows you to lessen the impact on your production systems and minimize downtime.

Why not use native tools?

The reason we recommend our own tools over native solutions is the incompatibility between SQL Server's and PostgreSQL's CSV export / import tooling.

SQL Server's import / export client, BCP, has great support for customizing row and column delimiters - it even allows multicharacter delimiters. However, it can't create RFC 4180-compliant CSVs, because it doesn't support optionally enclosing a field (such as with double quotes) when the field contains a delimiter.

This is a major bummer, because PostgreSQL's COPY does not support multicharacter delimiters - it expects a single-character delimiter with optionally enclosed fields, exactly the pattern RFC 4180 prescribes.
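
For reference, the receiving side in PostgreSQL looks something like this (the table name and file path are placeholders):

-- PostgreSQL's COPY expects RFC 4180-style input: a one-character
-- delimiter, with fields optionally enclosed in double quotes.
COPY dbo.orders FROM '/tmp/orders.csv'
WITH (FORMAT csv, HEADER true, DELIMITER ',', QUOTE '"');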

This incompatibility is a crucial disconnect that can only be overcome with a custom translation script. It actually isn't that hard to write a script that translates a CSV from one format to another - in fact, that is basically what SQLpipe and Albatross do under the hood. However, you're here to migrate a database, not get into the weeds of CSV parsing details.

Final Changeover

The final changeover is when you finally point all of your applications at your new database (or route load to new, modified applications). This is an extremely high-leverage moment that requires the utmost preparation and care.

You should have rehearsed this changeover process at least once in a staging or dev environment, and you must have a contingency plan to roll back to your original environment in case unacceptable errors appear once the new stack goes live.

It is impossible to give specific advice for this stage without detailed knowledge of your given environment. Every organization's applications, deployment methods, and downtime tolerance is different. We offer consulting services that can help you prepare for this part of the process.

Wrapping up

We hope this article helped you understand how to migrate from SQL Server to PostgreSQL at a high level. If you would like help with such a project, please send us a message on our contact page; we would be happy to assist!
