Tag Archives: VDI

Clones are not (writable) snapshots!

Everyone who has ever used server or desktop virtualization has probably used clones. Even though “clone” is not a well defined storage term, in most cases it is used to describe a data (image) copy. Technically, this “copy” can be achieved using several technologies: copy clones, snapshot-based clones, mirror-based clones (BCV), etc. VMware uses the term “full clone” to describe a copy clone, while clones that use a delta/copy-on-write (snapshot) mechanism are called linked clones. Some people treat clones only as linked clones and/or writable snapshots. NetApp has a feature called FlexClone that is just a writable snapshot.

My view on this is that the term “clone” (as it is used in virtualization systems) should describe the use case and not the technology. Even though snapshots and clones may use the same underlying technology, their use cases and usage patterns are not the same. For example, in many systems the snapshots' source volume is more important to the user than its snapshots and has a preferred status over them (the backup scenario). Technically, the source volume is often fully provisioned, with strict space accounting and a manual removal policy, while the snapshots are likely to be thin-provisioned (“space efficient”) and may have an automatic removal (expiration/exhaustion) policy and soft/heuristic space management (see XIV for example).

This preferred-source scheme will not work for clones; in many cases the source of the clones is just a template that is never used by itself, so you can store it on a much less powerful storage tier, and once you have finished generating the clones, you can delete it if you want. The outcome of the cloning is much more important than the template: if space runs out you may delete a few old templates, but you won't remove the clones while they are in use, as each of them is a standalone VM image.

This can be demonstrated by VMware's linked clones, which are implemented as writable snapshots on top of a read-only base. When you generate a linked clone pool using VMware View Manager, the manager creates a read-only full clone (the “replica”) and takes snapshots of it. This clever scheme hides the snapshot source, and in most cases you don't directly manage or use replicas. The base template has no role after the cloning ends and can be deleted.
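
As a minimal sketch of this flow (a conceptual model only; the class and function names are mine, not VMware's API):

    # Conceptual model of linked-clone pool creation as described above.
    # Illustrative names only; this is not the actual VMware API.

    class Image:
        def __init__(self, name, read_only=False):
            self.name = name
            self.read_only = read_only

    def create_linked_clone_pool(template, pool_size):
        # Step 1: make a read-only full copy of the template (the "replica").
        replica = Image(template.name + "-replica", read_only=True)
        # Step 2: take pool_size writable snapshots on top of the replica.
        clones = [Image(f"{replica.name}-clone-{i}") for i in range(pool_size)]
        # Step 3: the template has no further role and may be deleted.
        return replica, clones

    replica, pool = create_linked_clone_pool(Image("golden-template"), 100)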

Another major difference is the creation pattern: snapshot creation events tend to be periodic (backup/data-set-separation scenarios), while clone creation (at least for VDI use cases) tends to be bursty, meaning that each time clones are created from a base (template/replica), many of them are created at once.

This means that if you build a graph of source-snapshot (or clone) creation over time, a typical snapshot graph will be a dense and long tree (see below), while the equivalent typical clone tree will be very shallow but with a big span-out factor (maybe it should be called a clone bush 😉 ). The following diagrams depict such graphs:

Typical snapshots tree

Typical clones tree

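To make the shape difference concrete, here is a small hypothetical sketch that builds both creation graphs and measures their depth and span-out (all numbers are illustrative):

    # Hypothetical creation graphs: node -> list of children.
    # Periodic snapshots yield a long, narrow tree; bursty cloning
    # yields a shallow tree with a huge span-out (a "clone bush").

    snapshot_tree = {"src": ["s1"], "s1": ["s2"], "s2": ["s3"], "s3": []}
    clone_tree = {"template": [f"clone{i}" for i in range(1000)]}

    def depth(tree, node):
        children = tree.get(node, [])
        return 1 + max((depth(tree, c) for c in children), default=0)

    def span_out(tree):
        return max(len(children) for children in tree.values())

    print(depth(snapshot_tree, "src"), span_out(snapshot_tree))    # 4 1
    print(depth(clone_tree, "template"), span_out(clone_tree))     # 2 1000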

Due to these differences, even though under the hood snapshots and (linked) clones may be implemented using the same technologies, they should not be implemented in the same way: many (if not most) implementation assumptions for snapshots are not valid for clones, and vice versa!

A very good example of such an assumption is the span-out level. Many snapshots are implemented as follows: the source has its own guaranteed space, and each snapshot has its own delta space. When a block in the source is modified, the old block is copied to the snapshot delta spaces (copy-on-write). This common technique is very efficient for the (primary) source and (secondary) snapshot scheme, but on the other hand it assumes that the span-out level is low, because the modified block has to be copied to each snapshot's delta space. Imagine what would happen if you had 1000 (snapshot-based) clones created from the same source!
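
A toy sketch of that scheme (my own simplification, not any specific array's implementation) makes the cost visible:

    # Toy copy-on-write model in which every snapshot keeps its own
    # delta space: overwriting one source block must first copy the old
    # data into every delta, so the cost grows with the span-out level.

    class CowVolume:
        def __init__(self, blocks):
            self.blocks = blocks      # source data: block number -> bytes
            self.deltas = []          # one private delta dict per snapshot

        def snapshot(self):
            self.deltas.append({})

        def write(self, blk, data):
            copies = 0
            for delta in self.deltas:             # preserve the old block
                if blk not in delta:
                    delta[blk] = self.blocks[blk]
                    copies += 1
            self.blocks[blk] = data
            return copies             # physical copies per logical write

    vol = CowVolume({0: b"old"})
    for _ in range(1000):             # 1000 snapshot-based "clones"
        vol.snapshot()
    print(vol.write(0, b"new"))       # -> 1000 copies for a single write!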

If we go back to VMware's linked clone case, the read-only replica is what enables VMware to generate many writable snapshots on top of a single source. The original snapshot mechanism cannot do that!
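
By contrast, with writable snapshots over a read-only base, each clone's writes can be redirected into the clone's own delta, so the cost of a write does not depend on how many clones exist. Again, a sketch of my own, not VMware's actual implementation:

    # Writable snapshots over a shared read-only replica: a clone's write
    # goes only to its private delta, never to the base, so one logical
    # write costs one physical write regardless of the clone count.

    class LinkedClone:
        def __init__(self, base):
            self.base = base          # shared read-only replica blocks
            self.delta = {}           # this clone's private writes

        def read(self, blk):
            return self.delta.get(blk, self.base[blk])

        def write(self, blk, data):
            self.delta[blk] = data    # O(1); the base is untouched

    replica = {0: b"golden"}          # read-only base
    clones = [LinkedClone(replica) for _ in range(1000)]
    clones[0].write(0, b"patched")
    print(clones[0].read(0), clones[1].read(0))   # b'patched' b'golden'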

To sum this post up, I claim that:

  1. Clones (even linked clones) are not snapshots.
  2. Most (if not all) storage systems do not implement the clone use case, but rather just the snapshot use case.
  3. It is time for storage systems to implement clones.

Filed under VDI, Virtualization

Desktop virtualization (VDI): is it too complex?

I have been following VDI technologies and solutions right from the days people started talking about them (around 2003), and I even participated in VDI technology development in my days at Qumranet. After 8 years, I am reviewing the current VDI solutions, and I have one very clear observation: they are far too complex. With the complexity come high operational costs (OPEX), and expensive setups are required (== high CAPEX). I think that something very wrong happened with VDI along the way. Just to be clear, I am not criticizing a specific solution; I think that the dominant VDI architecture is just wrong, regardless of the vendor. As I see it, VDI solutions are built like this:

Take a server virtualization technology, use it to run many desktops on each physical host, add a decent remoting protocol, multimedia acceleration (optionally also WAN acceleration), a desktop-to-user broker, a user (login) portal and/or other access control, several provisioning mechanisms, several update/patch mechanisms, several image cleanup mechanisms, application virtualization, profile virtualization, application streaming, user data redirection, an antivirus accelerator, a management console to manage pools, another one to manage applications, a storage solution for the storage storms, and a network solution for the network storms. If I didn't miss something critical (and I am sure I did), you have a VDI solution. Oops! I totally forgot the OS, the system utilities, and the applications (but they are old news ;-))…

The above seems to be a good basis for another Carlin-style gig (see “Modern Man”), but it cannot be a good basis for a solid enterprise-level solution.

I have many thoughts on why this is so and what the solution for it is, but this will have to wait for another post.


Filed under VDI

SSD Dedup and VDI

I found this nice Symantec blog post about the SSD + dedup + VDI issues on the DCIG site. Basically I agree with its main claim that SSD + dedup is a good match for VDI. On the other hand, I think that the three potential “pitfalls” mentioned in the post are probably relevant for a naive storage system, and much less so for an enterprise-level disk array. Here is why (the bulleted parts are citations from the original post):

  • Write I/O performance to SSDs is not nearly as good as read I/Os. SSD read I/O performance is measured in microseconds. So while SSD write I/O performance is still faster than writes to hard disk drives (HDDs), writes to SSDs will not deliver nearly the same performance boost as read I/Os plus write I/O performance on SSDs is known to degrade over time.
This claim is true only for non-enterprise-level SSDs. Enterprise-level SSDs suffer much less from write performance degradation, and due to their internal NVRAM, their write latency is as good as their read latency, if not better. Furthermore, most disk arrays have non-trivial logic and enough resources to handle these issues even if the SSDs cannot.
  • SSDs are still 10x the cost of HDDs. Even with the benefits provided by deduplication an organization may still not be able to justify completely replacing HDDs with SSDs which leads to a third problem.
There is no doubt that SSDs are at least 10x more expensive than HDDs in terms of $/GB. But when comparing the complete solution cost, the outcome is different. In many VDI systems the real storage constraint is IOPS, not capacity. This means that an HDD-based solution may need to over-provision the system capacity and/or use small disks so that you have enough (HDD) spindles to satisfy the IOPS requirements. In this case, the real game is IOPS/$, where SSDs win big time. Together with the dedup-driven space reduction, the total solution's cost may be very attractive (see the back-of-the-envelope sizing sketch at the end of this post).
  • Using deduplication can result in fragmentation. As new data is ingested and deduplicated, data is placed further and further apart. While fragmentation may not matter when all data is stored on SSDs, if HDDs are still used as part of the solution, this can result in reads taking longer to complete.

Basically I agree, but again, the disk array logic may mitigate at least some of the problem. Of course a 100% SSD solution is better (much better in some cases), but the problem is that such solutions are still very rare, if they exist at all.
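
To put some numbers behind the IOPS-versus-capacity argument in the second bullet above, here is a back-of-the-envelope sizing sketch; every figure in it is an illustrative assumption of mine, not a vendor's specification:

    # Rough VDI sizing: a pool of drives must satisfy both the IOPS and
    # the capacity requirements, so the drive count is the max of the two.
    # All numbers below are illustrative assumptions.

    import math

    def drives_needed(total_iops, total_gb, iops_per_drive, gb_per_drive):
        by_iops = math.ceil(total_iops / iops_per_drive)
        by_capacity = math.ceil(total_gb / gb_per_drive)
        return max(by_iops, by_capacity)

    # 1000 desktops, ~10 IOPS each, ~20 GB each before dedup.
    hdds = drives_needed(10_000, 20_000, iops_per_drive=150, gb_per_drive=1_000)
    ssds = drives_needed(10_000, 2_000, iops_per_drive=20_000, gb_per_drive=200)
    #                            ^ assuming ~10x dedup on the SSD tier

    print(hdds)   # 67 HDDs, bought for IOPS, with lots of idle capacity
    print(ssds)   # 10 SSDs, sized by capacity, with IOPS to spare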


Filed under Enterprise Storage, ssd, Storage architectures, VDI, Virtualization