In an earlier post, I wrote about how I needed to create a Squid proxy server to get internet access from a server on my IBM Cloud Classic private network. In this post, I want to dive a bit deeper into the Classic network architecture and how it compares to Virtual Private Cloud (VPC).
To recap, IBM Cloud Classic has separate private and public networks within a customer account. Servers that get deployed are placed into VLANs on the private network, and they can optionally have public VLANs connected to their public interfaces. Having a public VLAN on a server means that the server also gets an internet-routable IP address. From a security standpoint, security groups or a network perimeter firewall should be used as a layer of protection against connections from the internet.

Typically, what I recommend to most clients is not to use the public VLAN on the servers. While it could be secured as mentioned above, I (and most security teams that I deal with) do not feel comfortable with an internet-routable IP directly on the servers. If an admin makes a mistake, such as removing the security group or removing the public VLAN from a firewall, that server becomes exposed to the internet.
My recommendation is to connect the servers to private VLANs only and have all traffic to or from the internet be NATted by the firewall, which is connected to the public VLAN. More complex environments may have multiple firewalls.
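As a quick illustration, here is a minimal sketch of what ordering a private-only Classic virtual server could look like with the IBM Cloud Terraform provider. The hostname, domain, data center, OS code, and sizing values are placeholders I made up for the example.

```hcl
# Classic virtual server with no public interface (placeholder values throughout).
resource "ibm_compute_vm_instance" "app01" {
  hostname             = "app01"
  domain               = "example.com"
  os_reference_code    = "UBUNTU_20_64"
  datacenter           = "dal10"
  cores                = 2
  memory               = 4096
  network_speed        = 1000
  hourly_billing       = true
  private_network_only = true # no public VLAN, so no internet-routable IP
}
```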

Having separate networks allows servers to be physically disconnected from the internet. This has security benefits, but it does take some time to set up when internet connectivity is needed, since the cloud administrator must configure the firewall device for NAT. The firewall can also become a bottleneck, depending on throughput requirements, because it is deployed on dedicated hardware, and more firewalls may be needed as the environment scales out.
There are a few use cases where you would put the public VLAN directly on the server:
- The server is in a network DMZ
- An application does not work well with NAT
- Extra internet egress allotment through bandwidth pooling
The first two bullets are straightforward, while the last one is more of a billing consideration. IBM Cloud Classic networking includes a free internet egress allotment when servers are deployed with public interfaces, ranging from 250 GB to 20 TB per server. These allotments can be pooled and shared by all the servers in the region. Many customers never incur internet egress charges because their usage falls within the pooled free allotment.
One of the main challenges with Classic networking, as mentioned, is getting it set up in the first place. For most customers with steady-state workloads, it is a one-time setup. For customers looking to build and tear down environments, further configuration is often needed. For example, if a new VLAN is created in the cloud to isolate servers for a specific project, the firewall configuration needs to be updated to protect that VLAN.
Another challenge with Classic networking is that it automatically assigns IP subnets to the customer account from the 10.0.0.0/8 address space. This does not fit the addressing plans of most enterprise customers, so additional configuration is needed on the firewall to enable custom addresses through the creation of an overlay network.
This is where VPC networking comes in. VPC allows customers to create their cloud environment on top of the IBM Cloud network. Where Classic networking is built using physical appliances, VPC uses logical components.
For example, if I were to deploy a Virtual Server Instance (VSI) in a VPC and needed outbound internet access, I would not have to deploy a physical firewall device to perform NAT as in Classic. Instead, I could activate the public gateway in the VPC to handle NAT. It can be activated in seconds at the click of a button or with automation tools such as Terraform. This is important because it allows customers to be more agile and set up environments more quickly than they could with Classic.
There is also no restriction on the private addresses that can be used; customers define the subnets that they want servers provisioned on, without workarounds such as the overlay network used in Classic.
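To give a sense of how little is involved, here is a rough Terraform sketch using the IBM Cloud provider's VPC resources: it creates a VPC with a custom address prefix, a subnet carved from that prefix, and a public gateway attached to the subnet for outbound NAT. All names, the zone, and the CIDR are placeholders.

```hcl
# VPC with a custom-addressed subnet and a public gateway for outbound NAT.
# Names, the zone, and the CIDR block are placeholders.
resource "ibm_is_vpc" "demo" {
  name                      = "demo-vpc"
  address_prefix_management = "manual" # we define our own address prefixes
}

resource "ibm_is_vpc_address_prefix" "demo" {
  name = "demo-prefix"
  vpc  = ibm_is_vpc.demo.id
  zone = "us-south-1"
  cidr = "172.16.10.0/24" # any private range the enterprise already uses
}

resource "ibm_is_public_gateway" "demo" {
  name = "demo-pgw"
  vpc  = ibm_is_vpc.demo.id
  zone = "us-south-1"
}

resource "ibm_is_subnet" "demo" {
  name            = "demo-subnet"
  vpc             = ibm_is_vpc.demo.id
  zone            = "us-south-1"
  ipv4_cidr_block = ibm_is_vpc_address_prefix.demo.cidr
  public_gateway  = ibm_is_public_gateway.demo.id # enables outbound NAT for the subnet
}
```

Tearing the environment back down is just as quick, which is what makes build-and-tear-down patterns practical in VPC.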
Overall, VPC is a significant improvement for cloud networking over Classic, but how would you decide which to use for a new deployment? My recommendation is to deploy in VPC, but that may not always be possible. VPC in its current form does not have complete feature parity with Classic in the services it supports. As of this writing, VSIs work in both Classic and VPC, Bare Metal servers and VMware solutions sit in Classic only, and Kubernetes Service clusters can sit in both, but only with VSI worker nodes in VPC. Eventually, I expect all of these services to be available in VPC.
VPC is also still being rolled out to regions worldwide. Today it is targeted at Multi-Zone Regions (MZRs), the main cloud regions with multiple Availability Zones (AZs) in a geography, which means these regions get new services first. Single-Zone Regions (SZRs, with one AZ per region) are Classic only today. So, depending on geographic or data residency requirements, deploying in an SZR with Classic may be a requirement.
For enterprises that start with a Classic environment and want new deployments in VPC, it is possible to connect Classic and VPC networks together using the Transit Gateway service. This is a common pattern: new, modernized workloads run in VPC on the Kubernetes Service while still needing to access data and legacy applications running in Classic.
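The Transit Gateway side can also be automated with Terraform. The sketch below assumes the provider's ibm_tg_gateway and ibm_tg_connection resources and reuses the example VPC from the earlier snippet; the gateway name and location are placeholders.

```hcl
# Transit Gateway connecting the account's Classic network and a VPC.
resource "ibm_tg_gateway" "tgw" {
  name     = "classic-to-vpc"
  location = "us-south"
  global   = false # local routing within the region
}

# Attach the account's Classic private network.
resource "ibm_tg_connection" "classic" {
  gateway      = ibm_tg_gateway.tgw.id
  network_type = "classic"
  name         = "classic-conn"
}

# Attach the VPC by its CRN (here, the demo VPC from the earlier example).
resource "ibm_tg_connection" "vpc" {
  gateway      = ibm_tg_gateway.tgw.id
  network_type = "vpc"
  name         = "vpc-conn"
  network_id   = ibm_is_vpc.demo.crn
}
```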
In a future post, I will show how to create a VPC and set up connectivity between it and a VMware cluster running in a Classic environment.