Vendor Lock-In - an aspect that should always be considered at the beginning of a data science project.
Vendor Lock-In happens when you decide to use the platform and tools of a specific vendor. Once you do, you no longer have the freedom to simply switch vendors without investing a huge amount of work.
You may know the situation: At the beginning of a project, you have - perhaps a bit hastily - decided on a vendor. But now you realize that this was not the best choice for your project.
So you want to change to something else.
But that's a big problem: both technically and financially, switching is very difficult, because nothing fits your current setup.
A Difficult Switch From Google To AWS
For example: if you choose Google Cloud, you can't easily switch to AWS or anywhere else.
Google's services simply don't exist on AWS.
You are locked into Google, because otherwise you need to learn the AWS tools, set up your project on AWS, and most likely rewrite everything you built.
But if you use more general tools, it is much easier to rebuild on another platform.
General Tools To The Rescue
Let's say you use open-source tools like the thousands of Apache projects.
If you realize that your platform is not the right one, or you find something else that is equally good but cheaper, you can just take your tools and your code, set them up somewhere else, and run everything on a different platform.
Exactly this vendor lock-in is the reason why I love tools like Apache Kafka or Spark. They can be installed anywhere: on-premises, on GCP, AWS, or Azure.
It does not matter. You retain full flexibility without committing fully to a cloud or tool vendor.
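To make the idea concrete, here is a minimal sketch of how you can keep your own code portable: hide the vendor-specific part behind a small interface, so only one adapter changes when you move platforms. All class and function names here (Storage, InMemoryStorage, run_pipeline) are my own illustration, not from any specific library.

```python
from abc import ABC, abstractmethod

# Hypothetical storage abstraction: the pipeline below only talks
# to this interface, never to a vendor-specific SDK.
class Storage(ABC):
    @abstractmethod
    def write(self, key: str, data: str) -> None: ...

    @abstractmethod
    def read(self, key: str) -> str: ...

# In-memory adapter for local development. A GCS or S3 adapter
# would implement the same two methods using the vendor's SDK,
# and nothing else in your codebase would need to change.
class InMemoryStorage(Storage):
    def __init__(self) -> None:
        self._blobs: dict[str, str] = {}

    def write(self, key: str, data: str) -> None:
        self._blobs[key] = data

    def read(self, key: str) -> str:
        return self._blobs[key]

def run_pipeline(storage: Storage) -> str:
    # The pipeline never mentions a vendor, so switching clouds
    # means swapping the adapter, not rewriting this logic.
    storage.write("report.txt", "daily metrics")
    return storage.read("report.txt")

print(run_pipeline(InMemoryStorage()))  # prints: daily metrics
```

Tools like Kafka and Spark give you the same effect one level up: the tool itself is the vendor-neutral interface, and only the infrastructure underneath it changes.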
Have you had any bad experience with Vendor Lock In?
Let's chat in the comments :)
>> created by Mira Roth
Check out the full video on YouTube