Hi,

So in case you didn't hear the news, we have just released a full range of products.

You can watch a replay of the launch here:

https://community.emc.com/community/events/live_events/theater_41

So where are we with the product release?

Well, we have just entered a phase called Direct Availability (DA), which means that we can now quote and sell the product for specific use cases in specific regions across the world. This phase is needed in order to align the large EMC services organization behind us: it's one thing for a small company to potentially introduce a bug into a customer environment, it's another when the large EMC is behind you, so quality comes above everything else at this stage.

Think you have the right use case (VDI, server virtualization or databases)? Let your EMC rep know and they will contact us.

One of the most common questions I get when visiting customer sites is:

“If flash changes the dynamics of the array, can I just take my existing legacy array and populate it with flash drives? After all, it would then be considered an all-flash array, right?”

Yes and NO!

Yes, because if you are using only flash drives it ticks a checklist box called “ALL FLASH ARRAY”. But flash is just the foundation for the advanced software that runs on top of it. So let's compare us to a company known as a legacy vendor, one that thought that simply putting in flash drives would tick customers' boxes. Unfortunately for them, customers tend to be very clever people; heck, I'm learning every day from visiting customers!

As you can see, they don't have what we consider “data services”, which I argue are a must when you design an enterprise array.

OK, but let's be fair: what about the other startup companies out there that have designed an AFA? Surely you all look the same, right?

NOPE

Let's examine the leading company out there. They don't scale out, and because they don't, they don't scale linearly (duh!) and their performance is not predictable (just try to fill up their array and see what happens to it).

Why, you may wonder? A lot of it has to do with these other startups' garbage collection process, which kicks in when the array becomes full. An SSD cell can't simply be rewritten; it has to be erased first, and when your array fills up, the garbage collection process tries its best to keep up. This results in VERY unpredictable performance, and I encourage you not to take what I say for granted but to test it for yourself!
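To make the garbage collection point concrete, here is a tiny, purely illustrative Python sketch. It's my own toy model of erase-before-write flash management, not anyone's actual firmware, and the block and page counts are made up for the example. It shows why the number of flash writes per host write (write amplification) climbs as the drive fills up: once no empty blocks are left, every new write first has to relocate the still-valid pages of some victim block before that block can be erased.

```python
# gc_toy.py - deliberately simplified flash garbage-collection model.
# My own toy illustration, NOT any vendor's actual firmware.
import random

PAGES_PER_BLOCK = 128
NUM_BLOCKS = 256                                     # toy drive: 256 blocks x 128 pages


def write_amplification(fill_ratio, host_writes=200_000, seed=1):
    """Flash writes per host write when the drive is `fill_ratio` full."""
    rng = random.Random(seed)
    logical_pages = int(NUM_BLOCKS * PAGES_PER_BLOCK * fill_ratio)

    blocks = [set() for _ in range(NUM_BLOCKS)]      # valid logical pages per physical block
    location = {}                                    # logical page -> physical block index
    free_blocks = list(range(1, NUM_BLOCKS))         # never-written (or erased) blocks
    state = {"active": 0, "used": 0, "flash_writes": 0}

    def write(lpage):
        old = location.get(lpage)
        if old is not None:
            blocks[old].discard(lpage)               # old copy is now invalid
        if state["used"] == PAGES_PER_BLOCK:         # the open block is full
            if free_blocks:
                state["active"] = free_blocks.pop()  # grab an empty block
                state["used"] = 0
            else:
                # Garbage collection: pick the block with the fewest valid pages,
                # relocate (re-write) those pages, then reuse the erased block.
                victim = min((b for b in range(NUM_BLOCKS) if b != state["active"]),
                             key=lambda b: len(blocks[b]))
                survivors = list(blocks[victim])
                blocks[victim].clear()               # erase
                state["active"], state["used"] = victim, 0
                for page in survivors:
                    write(page)                      # relocation = extra flash writes
        blocks[state["active"]].add(lpage)
        location[lpage] = state["active"]
        state["used"] += 1
        state["flash_writes"] += 1

    for lpage in range(logical_pages):               # pre-fill the logical space
        write(lpage)
    state["flash_writes"] = 0                        # measure steady-state overwrites only
    for _ in range(host_writes):
        write(rng.randrange(logical_pages))
    return state["flash_writes"] / host_writes


for fill in (0.5, 0.7, 0.85, 0.95):
    print(f"{fill:.0%} full -> ~{write_amplification(fill):.2f} flash writes per host write")
```

Run it and you should see the write amplification factor grow as the fill level approaches 100%, which is exactly where the unpredictable performance comes from.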

Inline data deduplication is inherent to our core architecture, as opposed to a third-party add-on that, guess what, is going to slow them down. In our case it's quite the opposite: it makes us FASTER!!

Take a look at what happens to our array with data deduplication: the better the dedupe ratio, the better the performance you will get from us. Kinda crazy, I know, because on a legacy array deduplication is a post-process that requires you to keep spare capacity and kills your engines while it runs its post-process task.

But we are talking about AFA startups, right, so let's not drift: if you are using a third party to provide dedupe, your core architecture is flawed and it will result in performance degradation. Again, don't believe me, test it for yourself!
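To illustrate what “inline” means here, below is a minimal Python sketch of the general concept of content-aware, inline deduplication. It is a generic illustration under made-up assumptions (4 KB blocks, SHA-256 fingerprints, an in-memory fingerprint table), not a description of our actual data path. The point it demonstrates: a duplicate block is detected before it is written, so it only costs a metadata update, which is why a better dedupe ratio means fewer back-end writes per host write.

```python
# dedupe_toy.py - generic sketch of inline, content-aware deduplication.
# An illustration of the concept only, not our actual data path.
import hashlib
import os
import random

BLOCK_SIZE = 4096                       # assume 4 KB host blocks for this toy


class InlineDedupStore:
    def __init__(self):
        self.fingerprints = {}          # content hash -> [physical address, refcount]
        self.host_writes = 0            # blocks the host asked us to write
        self.flash_writes = 0           # blocks that actually hit the flash

    def write_block(self, data: bytes) -> None:
        self.host_writes += 1
        fp = hashlib.sha256(data).digest()          # fingerprint the data inline
        entry = self.fingerprints.get(fp)
        if entry is not None:
            entry[1] += 1                           # duplicate: metadata update only
        else:
            self.fingerprints[fp] = [self.flash_writes, 1]
            self.flash_writes += 1                  # only unique data is written


# Feed the store a write stream where ~75% of blocks repeat earlier content
# (think VDI clones): most host writes never reach the flash at all.
rng = random.Random(0)
clones = [os.urandom(BLOCK_SIZE) for _ in range(1000)]
store = InlineDedupStore()
for _ in range(20_000):
    data = rng.choice(clones) if rng.random() < 0.75 else os.urandom(BLOCK_SIZE)
    store.write_block(data)

print(f"host writes: {store.host_writes}, flash writes: {store.flash_writes}, "
      f"effective dedupe ratio ~{store.host_writes / store.flash_writes:.1f}:1")
```

With a write stream that is roughly 75% duplicates, only a fraction of the host writes ever reach the flash, so the back end has less work to do per host write, not more.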

There are a lot more important things to write about in the future. Until next time, yours truly.
