
Here we are at the first stage of our guide to choosing a Virtual Private Server cloud service: let's uncover the secrets of cores and vCPUs.

If you're not satisfied with the offerings from Amazon, Google and Microsoft, a couple of Google searches are enough to find hundreds, even thousands, of different VPS (Virtual Private Server), public cloud and other IaaS (Infrastructure as a Service) services.

Prices are extremely heterogeneous, ranging from less than €20/yr to several thousand euros per month. In most cases the offer consists of virtual machines, usually running Linux and sometimes Windows. Even when we fix some basic requirements (RAM, disk, cores and available bandwidth), prices still vary widely, even excluding extras such as backup or firewall services. How is that possible? Are there obscure, hidden differences that explain the gap, or are we simply paying for the brand? What kind of support is included? What availability does the service guarantee?

An impossible comparison?

It's not unusual to find heavily discounted VPS offers online. In this case the price is less than $20/yr, but the SSDs used are desktop-grade.

It's hard to reach firm conclusions. The parameters and factors mentioned above certainly contribute to the final price of a VPS; what is hard is choosing a service able to guarantee an acceptable service level and adequate support. It's unwise to oversimplify: it's wrong, for instance, to assume that giants like Microsoft, Google or Amazon are necessarily the best options available. They have all suffered more or less significant problems in the past, and their offerings, increasingly hard to decipher, hide pitfalls and details that are not always clear.

Another mistake is to rely, for anything beyond testing, on providers of dubious reputation that sell their services at cut-rate prices or adopt overselling policies. The obvious risks are long outages, high latency and performance well below expectations. The worst case is the sudden, unannounced termination of the service.

As with any cloud service, a key point to keep in mind is how long it takes to migrate the service and export your data when moving to a different public solution or to an on-premises alternative. You clearly don't want to end up hostage to your own service provider.

In this first article we will start with the number of "cores", the parameter most commonly used to express computing power when selling a VPS. In the next articles we will explain how to evaluate offerings based on the available RAM and I/O resources (disks, SSDs and storage), and we'll analyze other parameters that deserve careful evaluation, such as licences, the type of support and other optional services.

Cores, those unknowns

Be careful when calculating the total number of cores where Hyper-Threading is involved: 2 vCPUs correspond to a single physical core!

The number of cores is certainly one of the parameters most used when selling a VPS. Unfortunately it is a very generic term that can mean everything or nothing at all. On the hardware side, the computing power of a single core varies with the processor: performance differs significantly even within the same Xeon generation, and the gaps become huge when considering all the Intel and AMD CPUs found in production servers.

The reality is even worse than expected. When we talk about the cores of a VPS, we are not even talking about physical cores. First we need to consider the possible use of Intel's Hyper-Threading technology (Intel being, essentially, the only manufacturer of server CPUs): a 6-core Xeon with Hyper-Threading presents the operating system with 12 logical cores. In vSphere, for instance, to assign a VM all the cores of a single 6-core CPU you must select 12 vCPUs, not 6, in the VM configuration. A vCPU (virtual CPU), or a "core" in a virtualized environment, therefore corresponds to half a physical core, not a whole one.
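From inside a Linux guest you can at least check what topology the hypervisor exposes. Below is a minimal sketch in Python, assuming a standard /proc/cpuinfo layout: it counts the logical CPUs and the unique physical cores behind them. Keep in mind that inside a VPS this is the virtual topology the hypervisor chooses to present, not the host's real hardware.

#!/usr/bin/env python3
"""Minimal sketch: logical CPUs vs. physical cores as seen by a Linux guest."""

def cpu_topology(path="/proc/cpuinfo"):
    logical = 0
    cores = set()                 # unique (physical id, core id) pairs
    phys_id = None
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "processor":
                logical += 1
            elif key == "physical id":
                phys_id = value
            elif key == "core id":
                cores.add((phys_id, value))
    # fall back to the logical count if the topology is not exposed at all
    return logical, len(cores) or logical

if __name__ == "__main__":
    logical, physical = cpu_topology()
    print(f"logical CPUs (threads): {logical}")
    print(f"physical cores exposed: {physical}")
    if logical > physical:
        print("SMT/Hyper-Threading visible: each 'core' is really a hardware thread.")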

If you then carefully read the fine print of the contract you signed with your provider, you might discover that you don't have exclusive access to that half core, or vCPU, at all. Cores can in fact be shared by assigning them to several VPSes; from the user's side it is hard to prove this at the performance level as long as the VPSes you share resources with are basically idle or consume only a few MHz. Even some medium/high-tier providers don't enforce a 1:1 vCPU/core policy, but assign at least two VPSes to each vCPU. Another widespread approach is to sell fixed RAM-and-cores bundles, so whoever needs a lot of RAM gets a lot of cores and, conversely, whoever needs a lot of cores must also buy a lot of RAM.
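Proving contention from the outside is difficult, but one indirect clue from inside a Linux guest is the "steal" time in /proc/stat, i.e. the time the hypervisor kept your vCPU waiting while it served other guests. The following is a minimal sketch, assuming the standard /proc/stat field order; a consistently high percentage while your VM is under load suggests an oversold host.

#!/usr/bin/env python3
"""Minimal sketch: estimate CPU 'steal' time on a Linux guest over a short interval."""
import time

def read_cpu_times(path="/proc/stat"):
    with open(path) as f:
        fields = f.readline().split()        # aggregate "cpu" line
    values = list(map(int, fields[1:]))
    total = sum(values)
    steal = values[7] if len(values) > 7 else 0   # 8th field is 'steal'
    return total, steal

if __name__ == "__main__":
    t1, s1 = read_cpu_times()
    time.sleep(5)                            # sample over 5 seconds
    t2, s2 = read_cpu_times()
    delta_total = (t2 - t1) or 1
    steal_pct = 100.0 * (s2 - s1) / delta_total
    print(f"steal time over the interval: {steal_pct:.2f}%")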

An original approach to expressing computing power was Amazon's ECU. ECU stands for EC2 Compute Unit and is a value defined as the equivalent of a 1.7 GHz Xeon from 2006. Fortunately this unit was abandoned in the summer of 2014, and Amazon now also uses the term vCPU in the description of its VPSes, or instances, as the American giant calls them. For each type of machine Amazon specifies which processors are used, giving a more precise meaning to the vCPU figure.

Burstable performance, slowed-down VPS?

amazonThe "expandable" performances used by Amazon on t2 instances. It's a complex way to tell you that performances are reduced.

That doesn't hold for all instances, though: the t2 family, the most used thanks to its appealing price, adopts a sophisticated CPU allocation scheme called "burstable performance". In essence, these VMs are not assigned a whole vCPU but a percentage of it: 10% for t2.micro, 20% for t2.small, 40% for t2.medium and 60% for t2.large. The figures refer to a single core; for a t2.medium, which has 2 cores, this means 20% of each core, or 40% of a single core for single-threaded applications.

The percentage above is not a hard cap but is tied to a number of credits assigned to the VM every hour. If the CPU is idle, the VM accumulates credits it can later spend to exceed those limits. In practice the mechanism only works for loads that are limited in time, like a lightly used web server. If the CPU stays under load, the VM is constantly held to the quoted percentages; when it has spare credits it can burst to 100% of the assigned cores.
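To make the credit mechanism concrete, here is a small numerical sketch of a baseline-plus-credits model. The rule "one credit = one vCPU at 100% for one minute", the 6-credits-per-hour rate for a 10% baseline and the 144-credit cap reflect AWS's published t2 model, but treat the exact figures as assumptions to verify against the current documentation.

#!/usr/bin/env python3
"""Numerical sketch of a baseline-plus-credits CPU model (t2-style)."""

BASELINE = 0.10                       # 10% of one vCPU (t2.micro-like, per the article)
CREDITS_PER_HOUR = BASELINE * 60      # 6 credits/hour under the stated assumptions

def simulate(demand_per_hour, start_credits=0.0, max_credits=144):
    """demand_per_hour: desired CPU utilisation (0.0-1.0) for each hour."""
    credits = start_credits
    for hour, demand in enumerate(demand_per_hour, start=1):
        credits = min(credits + CREDITS_PER_HOUR, max_credits)
        # minutes of full-speed CPU needed beyond the baseline this hour
        extra_needed = max(demand - BASELINE, 0) * 60
        burst = min(extra_needed, credits)
        credits -= burst
        achieved = min(demand, BASELINE + burst / 60)
        print(f"hour {hour:2d}: wanted {demand:4.0%}, got {achieved:4.0%}, "
              f"credits left {credits:6.1f}")

if __name__ == "__main__":
    # 8 idle hours followed by 4 hours of full load
    simulate([0.0] * 8 + [1.0] * 4)

Running it with eight idle hours followed by four hours of full load shows the typical pattern: roughly one hour of full-speed bursting, then a fall back towards the baseline percentage.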

CPU without Turbo

Even declaring which CPU is used is not enough to have precise information. According to some tests available on the Net regarding Azure's A9 machine (which costs €3,000/month for the Windows version and €2,800/month for the Linux one), for example, Microsoft uses a Xeon E5-2670 Sandy Bridge-EP but disables Turbo Boost to save on energy costs, losing almost 40% in single-threaded performance. That's a big difference, even when a precise processor model is declared! An interesting document detailing the processors and frequencies used by Azure VMs can be found at this address (http://blogs.technet.com/b/stephw/archive/2015/06/01/details-of-the-azure-processors-for-vm-sizes.aspx) on TechNet.
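If you suspect Turbo Boost has been disabled, a rough check from a Linux guest is to load one core and see whether the reported clock ever climbs above the base frequency printed in the CPU model string. A minimal sketch follows, assuming the guest exposes "cpu MHz" in /proc/cpuinfo; many hypervisors report a fixed value there, so treat the result as a hint rather than proof.

#!/usr/bin/env python3
"""Minimal sketch: does the vCPU ever exceed its base clock (i.e. is Turbo active)?"""
import re
import time

def model_and_mhz(path="/proc/cpuinfo"):
    model, mhz = "", 0.0
    with open(path) as f:
        for line in f:
            if line.startswith("model name"):
                model = line.partition(":")[2].strip()
            elif line.startswith("cpu MHz"):
                mhz = max(mhz, float(line.partition(":")[2]))
    return model, mhz

if __name__ == "__main__":
    model, _ = model_and_mhz()
    # base clock as printed in the model string, e.g. "... E5-2670 0 @ 2.60GHz"
    match = re.search(r"@\s*([\d.]+)\s*GHz", model)
    base_ghz = float(match.group(1)) if match else None

    end = time.time() + 10               # keep one thread busy for ~10 seconds
    peak = 0.0
    while time.time() < end:
        _ = sum(i * i for i in range(10_000))    # trivial busy work
        peak = max(peak, model_and_mhz()[1])

    print(f"CPU model : {model}")
    print(f"base clock: {base_ghz} GHz" if base_ghz else "base clock: not in model string")
    print(f"peak seen : {peak / 1000:.2f} GHz")
    if base_ghz and peak / 1000 <= base_ghz + 0.05:
        print("No frequency above base observed: Turbo Boost may be disabled or hidden.")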

Pay attention to the specifications

We have only given a few examples, citing some of the best-known cases; obviously we invite you to evaluate your cloud providers' offerings case by case, trying to obtain details on how the physical CPUs are actually used on the machines that will host your applications. This is the only way to avoid comparing apples and oranges and to make a rational purchase based on your needs, perhaps saving money. One evaluation that must not be forgotten is the comparison with on-premises or housing solutions. The price of a VPS includes electricity, UPS systems and all the setup, maintenance and management expenses. But when your calculations add up to the amounts mentioned above (several thousand euros per month), you might find that putting your machines in a datacenter, or even inside your own walls, is the more convenient option, even though it's more demanding and the results are less guaranteed. In spite of the "cloud" trend.
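As a closing aid to that comparison, here is a back-of-the-envelope sketch that weighs a monthly VPS fee against an owned server. Every figure in it (hardware cost, amortisation period, power draw, energy price, administration time) is a placeholder assumption to be replaced with your own numbers.

#!/usr/bin/env python3
"""Back-of-the-envelope sketch: monthly VPS fee vs. owning the hardware."""

def on_premises_monthly(hw_cost=6000, amortisation_months=36,
                        power_watts=350, kwh_price=0.25,
                        admin_hours=4, hourly_rate=50,
                        colo_fee=0):
    hardware = hw_cost / amortisation_months            # amortised purchase price
    energy = power_watts / 1000 * 24 * 30 * kwh_price    # ~30-day month of power
    admin = admin_hours * hourly_rate                    # sysadmin time
    return hardware + energy + admin + colo_fee

if __name__ == "__main__":
    vps_fee = 3000                 # e.g. a high-end instance like the A9 class above
    own = on_premises_monthly()
    print(f"cloud VPS : {vps_fee:8.2f} EUR/month")
    print(f"own server: {own:8.2f} EUR/month (hardware + energy + admin)")
    print("cheaper   :", "own server" if own < vps_fee else "cloud VPS")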

Next article: VPS (Second part): The SSD trick

About the Author

Filippo Moriggia

After more than 10 years of experience in technical journalism with PC Professionale (the Italian version of PC Magazine) and other publications of the Mondadori group, Filippo Moriggia founded GURU advisor, the reference website for IT professionals, system integrators, cloud providers and MSPs. He has a Master of Science in Telecommunications Engineering and works as an independent consultant and contractor for various firms. His main focuses are software, virtualization, servers, cloud, networking and security. He is VMware VCA certified for Data Center Virtualization.
