
Migrating SAP Commerce to AWS Cloud

Nico Beers, 04 November 2020

Running an application such as SAP Commerce on-premise was very common in the earlier days. But business landscapes change, and so do architectural requirements. This also applied to a project at one of our customers. In this article I will tell the story of our migration to the AWS Cloud: why did we migrate, and what challenges did we face?

Why did we migrate?

The main reasons for migrating are cost and efficiency. Running an on-premise system has its pros:

  1. With on-premise you have full control over accessibility and your servers
  2. Costs do not fluctuate unexpectedly
  3. Full control over hardware

But also its cons:

  1. Requires a big investment up front
  2. Requires IT personnel
  3. Increases risk

When running a cloud-based environment, you eliminate most of the cons. You pay a monthly fee that covers backups, fallback systems, maintenance, and connectivity. This reduces the cost of IT personnel, lowers risk, and eliminates the large initial investment in servers and infrastructure.

As cloud environments have improved their management tools, you have more control over and flexibility in your environment. With just a few clicks you can change the specifications of your machine.

With this flexibility you can tune your system to the best cost/efficiency ratio, which in our case is one of the most important benefits of using a cloud environment like AWS.
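
To make that concrete: assuming the environment is scripted with Python and boto3 (a tooling choice made here purely for illustration), resizing an EC2 machine boils down to a stop, an attribute change, and a start. The region, instance ID, and target type below are hypothetical placeholders.

import boto3

# Resize an EC2 instance: it must be stopped before its type can change.
# Region, instance ID, and target type are hypothetical placeholders.
ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Scale up to a larger machine type, e.g. for a seasonal peak.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])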

Our migration

We migrated one of our SAP Commerce projects from an existing on-premise environment to the AWS cloud. At first glance this doesn’t look difficult and should be possible to do in one day. But truth be told, it is a lot more difficult than you might expect. Our experience with this migration proves that good preparation is key to success. This is our story.

The plan

The first thing we did was analyze the current environment and include everyone involved in the process. We answered the following questions:

  1. What is the scope of the migration?
    • Is it a lift and shift or do we include more?
  2. What are the system requirements?
    • How much CPU power, memory, disk space, etc.?
  3. Which connections to and from the application are there?
    • How do users connect to the server?
    • How is the data of SAP Commerce exposed to external systems?
    • How are other systems connecting to SAP Commerce?
    • How are our monitoring tools connected?
  4. What connection adjustments do we need to make?
  5. How are we going to migrate the data?

We decided to do a “Lift and Shift” migration. This means that we changed as little as possible. As a next step, after the migration, we could further improve the environment to use the AWS cloud to its full potential.

For the systems we decided to go for EC2 machines on AWS with specifications closest to the current environment.

We created a diagram of all the other systems that are connected to our application in some way. We involved our network engineers to monitor traffic to and from our application to complete the picture. With all this information we planned how to adjust these connections to make everything work.

And lastly, we needed to think about the data: how could we migrate it to the new environment without losing anything? Because our application isn’t used outside regular working hours, we accepted downtime during the migration. This gave us the freedom to copy all data without worrying about updates happening during the migration.

Setting up the test environment

Setting up the test environment went quite smoothly. The EC2 servers were up and running in no time. Solr was also quite easy: we copied our existing Solr directory to the new EC2 instance and ran the application. No configuration changes were required.

To run the SAP Commerce application on the new EC2 instances we had to execute the following steps:

  1. Run the required database
  2. Adjust network connections where required
  3. Add a shared directory between all application servers
  4. Run the application itself on the EC2 instance

First we had to get our database ready. We chose to run our application database on AWS RDS with an Oracle instance; this is closest to the existing database, but maintained by AWS. After setting up the RDS database we needed to transfer our data. We first looked into the AWS Database Migration Service (DMS), but due to connection limitations we couldn’t get it working. We therefore decided to go for a plain export and import. This results in downtime, but that was no issue for us. The whole process of exporting the database, transferring it to AWS, and importing it took about two hours, mainly due to the size of our database.
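
We set the database up by hand, but for illustration, a boto3 sketch along the following lines would create a comparable managed Oracle instance. The identifiers, sizes, and credentials are hypothetical and should be matched to the specifications of your existing database.

import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Hypothetical sizing; match it to the specifications of the source database.
rds.create_db_instance(
    DBInstanceIdentifier="commerce-db",
    Engine="oracle-se2",
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=500,            # in GiB
    MasterUsername="admin",
    MasterUserPassword="change-me",  # store real credentials in Secrets Manager
    MultiAZ=True,                    # standby instance as a fallback
    BackupRetentionPeriod=7,         # automated daily backups, kept 7 days
)

# Wait until the instance is available before starting the import.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="commerce-db")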

We knew we would have issues with some of our network connections, so we needed to change them. For example, from the AWS environment we could not connect to our SMTP server. To solve this, we switched to our company’s cloud SMTP server, which is accessible from AWS.
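
A quick way to verify such a change is a connectivity probe from one of the EC2 instances. The sketch below uses Python’s standard smtplib; the hostname is a placeholder, not our actual server.

import smtplib

SMTP_HOST = "smtp.example.com"  # placeholder for the cloud SMTP server
SMTP_PORT = 587

# Probe the relay: connect, upgrade to TLS, and send a NOOP.
try:
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as smtp:
        smtp.starttls()  # most relays require TLS on the submission port
        code, message = smtp.noop()
        print(f"SMTP reachable: {code} {message.decode()}")
except (OSError, smtplib.SMTPException) as exc:
    print(f"SMTP unreachable: {exc}")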

In AWS we added an “Elastic File System” (EFS) and mounted it on all instances. This is the shared directory for our application.
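
Provisioning such a file system can also be scripted. A minimal boto3 sketch (subnet and security group IDs are hypothetical) creates the file system and one mount target per application subnet:

import boto3

efs = boto3.client("efs", region_name="eu-west-1")

# The creation token makes the call idempotent if it is retried.
fs = efs.create_file_system(CreationToken="commerce-shared-dir")
fs_id = fs["FileSystemId"]

# One mount target per subnet that hosts an application server.
for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ccc3333"],  # must allow NFS traffic (TCP 2049)
    )

# Each instance then mounts it over NFS, e.g.:
#   mount -t nfs4 <fs_id>.efs.eu-west-1.amazonaws.com:/ /mnt/shared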

To run the application we copied some configuration files to the new instances. From there, we needed to modify our deployment tool to target the new EC2 instances. This tool builds the application and deploys it on the targeted instances, reducing the amount of manual work required for a deployment.

Testing the environment and adjusting where necessary

With the setup done we could start testing. First we tested the application itself, which ran just fine with good performance. Then we tested all incoming and outgoing connections from our diagram and changed some network settings where required.
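
Working through a connection diagram like ours can be partly automated. A small probe script like the sketch below (all hostnames and ports are placeholders) verifies each endpoint in one go:

import socket

# Placeholder endpoints taken from the connection diagram.
ENDPOINTS = {
    "database": ("commerce-db.example.internal", 1521),  # Oracle listener
    "solr": ("solr.example.internal", 8983),
    "smtp": ("smtp.example.com", 587),
}

for name, (host, port) in ENDPOINTS.items():
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{name:10s} OK    {host}:{port}")
    except OSError as exc:
        print(f"{name:10s} FAIL  {host}:{port} ({exc})")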

But as you might expect, there are always unforeseen issues. We had an issue with a static IP address routed through a NAT VPN. This proved more difficult because we depended on a third party: the network changes required to resolve our issue conflicted with another ongoing project. Unfortunately, we could not resolve this in a short timeframe, and we were forced to postpone the rest of the migration to the AWS Cloud. Aside from this connection issue everything went as planned, and as soon as the issue is resolved, we will continue our migration.

Learnings

Throughout this project we ran into issues that we had expected and resolved up front, but also one unforeseen issue. We concluded that it is important to include everyone involved as early as possible. This sounds obvious, but it proved helpful in our case to solve some connection issues before migrating the application.

With the small scope, lifting and shifting the application from on-premise to the cloud, we were able to reduce the risk and lead time of the migration. Initially we wanted to include more: changing the monitoring and logging tools, adding load balancing, automatically scaling resources, and so on. But with a small scope it is easier to pinpoint the source of an issue when it occurs, which also shortens the lead time of the migration.

Our key takeaway is that it is very important to have a complete drawing of your current infrastructure: not only which applications and servers are connected, but also how they are connected. Based on this drawing you should be able to tackle issues in your infrastructure up front. In our case we missed the “how” in some of the connections, which resulted in an unforeseen issue.
