
Heartbleed-ing Your Way into Better Password Management

The Internet responded to Heartbleed. They even have stickers.*

But if you’re reading this, you’re still mulling it all over. I know I am.

So let’s cut the subtlety and skip the low-level tech conversation. Let’s talk about why you need to act and what you need to do to act right now.

Step 1: What Heartbleed Means To You

Here’s how I understand it:

Assume every single website you’ve ever logged into can be logged into, as you, by someone else.

If that does not scare you a little, read it again.

Every single place you have an account on the Internet could be logged into by a total stranger without your password.

The discovery of Heartbleed shows that a fundamental building block of Internet security has not been secure for a while. It’s identity theft to the max.

That doesn’t necessarily mean your assets are in danger. Every respectable website that manages your money pays attention to your IP address and access patterns.

Step 2: Manage Your Passwords with LastPass

Let’s look at the bright side. Your password strategy sucked already. You use the same password everywhere, or you forget it once a week and have to reset it.

Maybe Heartbleed is a fresh start for you.

Do yourself a huge favor as you start fresh in the right direction: use LastPass to manage your passwords.

The software is simple and secure. No further thought is needed here. It installs per web browser you use (like Chrome, Firefox, Safari), all from the same location.

From this point forward, you have just one password you HAVE to remember: your LastPass master password.

Since we’re doing this right, make it a passphrase, like XKCD explained so well. This guy made a generator for you too.
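If you like the idea but want to see how simple such a generator is, here is a minimal sketch in Python. It assumes a newline-delimited word list at /usr/share/dict/words (present on most Unix-like systems, not all), and it is not the generator linked above, just an illustration of the approach:

    # Minimal XKCD-style passphrase generator (illustration only).
    # Assumes a newline-delimited word list exists at the path below.
    import secrets

    def passphrase(wordlist_path="/usr/share/dict/words", words=4, sep="-"):
        with open(wordlist_path) as f:
            candidates = [w.strip().lower() for w in f if w.strip().isalpha()]
        # secrets.choice uses a cryptographically secure RNG, unlike random.choice
        return sep.join(secrets.choice(candidates) for _ in range(words))

    print(passphrase())  # e.g. "correct-horse-battery-staple"

Four random common words are long enough to be strong and odd enough to be memorable, which is the whole point of the XKCD approach.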

In all honesty, you could still keep your passwords memorable through a password theme: a website-dependent structure that makes each one easy to remember (see the sketch after this list). For example, “[website]77Wahoo!!” could be your format. You would use:

  • Facebook77Wahoo!! on Facebook.com
  • Twitter77Wahoo!! on Twitter.com
  • Google77Wahoo!! on Google.com
  • etc
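To make the theme concrete, here is a tiny, hypothetical helper that spits out passwords in that format. The function name and theme string are made up for illustration; this is convenience, not real security:

    # Illustrative only: builds passwords from the "[website]77Wahoo!!" theme above.
    def themed_password(website, theme="{site}77Wahoo!!"):
        site = website.split(".")[0].capitalize()  # "facebook.com" -> "Facebook"
        return theme.format(site=site)

    for domain in ("facebook.com", "twitter.com", "google.com"):
        print(domain, "->", themed_password(domain))

Keep in mind that a theme is guessable once any one of your passwords leaks, which is exactly why a manager plus unique passwords is the better long-term answer.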

Side note for those interested: I ran LastPass side-by-side with 1Password and found 1Password didn’t keep up. Two cases killed it (plus a bonus):

  1. Password updates: 1Password could create duplicate entries on update and required manual intervention to fix them. LastPass has a beautiful auto-update feature.
  2. Form filling: LastPass is a ninja at filling out forms. I haven’t typed out my home address on a website since I started using it. 1Password supposedly has form-filling functionality, but it didn’t fill out all forms smoothly, nor did it handle drop-downs as seamlessly as LastPass.
  3. Bonus: LastPass is free. And it’s better. And also free.

 Step 3: Change Your Passwords After You Get The Email

First you need “the email” from your vendor, like this one:

[Image: IFTTT’s Heartbleed notification email]

IFTTT is telling you they are no longer vulnerable. That’s a green light to update your password.

If you update your password before the vulnerability is fixed, you just have a new password that can still be stolen through Heartbleed.

So wait for the email.

So you’ve waited for the email. You have LastPass on your favorite browser(s).

Good. Now it’s time to deal with Heartbleed: go change all your passwords. Use LastPass to save them securely. You can even choose to auto-login on sites like this:

[Image: Auto-login is awesome]

What sites do you really have to change?

Mashable put together a list. If you own a site, give this a read. My rule: if it would ruin your day for someone else to log into a website, change that password.

Conclusion: Is The Internet Still Safe?  

I think of the honest people on the Internet as a herd of gazelles.

[Image: a herd of gazelles]

The more noticeable you are, whether as a public figure or because of your personal assets, the further you are from the center of the herd. The less noticeable you are on the Internet, the closer you are to the center.

Now the other factor.

The more up-to-date you are on software updates, the more often you change your password, the bigger, faster and stronger you are. The less secure your practices, the smaller and weaker you are. 

Long metaphor short, don’t be this guy.

[Image: You on the Internet]

Will I still continue to bank, buy and build everything online?

Of course.

 

* Feel free to send me a sticker if this post is helpful!

 


Technical Short: What’s with iSCSI Port Binding?

I’m learning much more about virtual networking in VMware as I work with customers as a Sales Engineer.

One checkbox I have to pay close attention to right now is called iSCSI port binding.

I love this image, compliments of Chad back in 2009.


 

First, let’s define it in VMware’s own words, from KB 2038869:

Port binding is used in iSCSI when multiple VMkernel ports for iSCSI reside in the same broadcast domain and IP subnet to allow multiple paths to an iSCSI array that broadcasts a single IP address.

If you’re anything like me, you’ve noticed the checkbox for iSCSI port binding and simply ignored it.

As an SE for Infinio, I now need to verify that customers do not have iSCSI port binding enabled on the vmkernel interface they’re using for NFS traffic.

Why does an NFS-only server-side caching solution care about iSCSI port binding?

I had to find out.

Here’s what I understand so far: enabling port binding bypasses some significant vSwitch functionality. With it enabled, the VMkernel interface takes over the pNIC associated with it. No vSwitch logic, which cuts Infinio out of the data path.

No data path, no acceleration.

To be honest, I still don’t understand exactly how  port binding jumps in the way. I think of it like a raw device mapping for pNICs.

The team at Infinio has tested and found – even with Promiscuous Mode enabled on the vSwitch – you cannot sniff traffic going over the pNIC taken by port binding.

Technical details admittedly unknown to me, VMware is very clear about what’s important to keep in mind when iSCSI port binding is used (from the same KB as above):

When using port binding, you must remember that:

  • Array Target iSCSI ports must reside in the same broadcast domain and IP subnet as the VMkernel port.

  • All VMkernel ports used for iSCSI connectivity must reside in the same broadcast domain and IP subnet.

  • All VMkernel ports used for iSCSI connectivity must reside in the same vSwitch.
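To make those three requirements concrete, here is a small sanity-check sketch in Python. The VMkernel port and target values are hypothetical and typed in by hand; nothing is queried from ESXi, and the subnet check stands in for the broadcast-domain requirement (VLANs are not modeled):

    from ipaddress import ip_address, ip_network

    # Hypothetical example data; replace with your own values.
    vmk_ports = [
        {"name": "vmk1", "ip": "10.0.10.11", "prefix": 24, "vswitch": "vSwitch1"},
        {"name": "vmk2", "ip": "10.0.10.12", "prefix": 24, "vswitch": "vSwitch1"},
    ]
    target_ip = "10.0.10.50"  # the array's iSCSI target portal

    subnets = {ip_network(f"{p['ip']}/{p['prefix']}", strict=False) for p in vmk_ports}
    vswitches = {p["vswitch"] for p in vmk_ports}

    checks = {
        "all VMkernel ports share one IP subnet": len(subnets) == 1,
        "all VMkernel ports share one vSwitch": len(vswitches) == 1,
        "array target sits on that same subnet": len(subnets) == 1
            and ip_address(target_ip) in next(iter(subnets)),
    }
    for rule, ok in checks.items():
        print(("OK  " if ok else "FAIL"), rule)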

[Image: One does not simply enable iSCSI port binding]

My simple understanding of the matter comes down to this:

  • If you’re not using multiple physical NICs for iSCSI multipathing, there’s no reason to enable iSCSI port binding
  • If you are using multiple pNICs for iSCSI traffic, keep your VMkernel interface for NFS traffic on a separate pNIC

For those more curious about how to configure port binding, Brian Tobia goes over how to set up iSCSI port binding on vPrimer. Build Virtual also has a tutorial, which includes the CLI commands.


Technical Short: The Complication that is VMkernel Multi-homing

I ran into a strange NFS permissions error in my work lab that kept me busy for a while. Here’s what I learned from it:

There is a configuration that VMware ESXi allows but that is reasonably well documented as a no-no. The technical term for it is multi-homing (KB 2010877).

It begins when someone doesn’t follow this statement:

Storage networking should always be in a dedicated subnet associated with a non-routable VLAN or a dedicated physical switch.

As someone who is a little rough around the edges on networking, this one got me at first. Here’s the crux:

For example, if you have VMkernel ports configured like this:

  • One VMkernel port for vMotion, named vmk0
  • Another VMkernel port for NFS, named vmk1

If both of these vmknics are configured to be on the same IP subnet, the VMkernel TCP/IP stack chooses one of the two interfaces for all VMkernel traffic (vMotion and NFS) going out on that subnet.
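Here is a quick way to picture (and flag) that condition. This Python sketch groups VMkernel interfaces by subnet and warns when more than one lands in the same subnet; the interface list is hypothetical example data, not pulled from a host:

    from collections import defaultdict
    from ipaddress import ip_network

    # Hypothetical VMkernel interfaces; this is the multi-homed example above.
    vmks = [
        {"name": "vmk0", "purpose": "vMotion", "ip": "192.168.20.10", "prefix": 24},
        {"name": "vmk1", "purpose": "NFS",     "ip": "192.168.20.11", "prefix": 24},
    ]

    by_subnet = defaultdict(list)
    for v in vmks:
        net = ip_network(f"{v['ip']}/{v['prefix']}", strict=False)
        by_subnet[net].append(v)

    for net, ports in by_subnet.items():
        if len(ports) > 1:
            names = ", ".join(f"{p['name']} ({p['purpose']})" for p in ports)
            print(f"WARNING: {names} share {net}; ESXi will pick one interface "
                  f"for all outbound traffic on that subnet")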

Thankfully I came across Mike Da Costa’s great write-up on the topic, which walks you through exactly what you can expect when you create a multi-homing configuration. These best practices, laid out by Mike, are stuck in my mind:

  1. Have only one VMkernel port per IP subnet (the only exception here is for iSCSI multi-pathing, or multi-NIC vMotion in vSphere 5.x).

  2. A dedicated non-routable VLAN or dedicated physical switch for vMotion purposes.

  3. A dedicated non-routable VLAN or dedicated physical switch for IP Storage purposes.

  4. A dedicated non-routable VLAN or dedicated physical switch for Fault Tolerance purposes.

What I take away from this experience is that VMkernel interface subnet isolation is a good first assumption to make when whipping up a design, even in the lab.

May this post save you a few minutes of your time and also inspire you to pick up Chris Wahl and Steve Pantol’s new book like I am.

[Update: 9:50am on April 8th]

I had a great response from Scott (S.) Lowe on Twitter, expanding upon the conversation started here:

[Image: Twitter conversation with Scott Lowe]

 

These updates are well documented in the new vSphere Networking Guide from VMware.

[//End update]


The Many Differences Between Server-side Caching and Other Solutions

There are two immediate reactions to introducing someone to server-side caching: excitement and the need to compare it to something more familiar.

The excitement comes from the clear logic involved: bringing data closer to the application is a no-brainer. Doing so at the right price is also a no-brainer.

The comparisons come in anywhere across the map.

I’ve discussed the difference between server-side caching (not Infinio specifically) and all of the following:

  • Converged infrastructure plays like Nutanix
  • All-flash arrays like Pure
  • Server SAN solutions like VMware VSAN
  • Storage-side cache solutions like VNX Flash Cache
  • Software-defined storage like Atlantis USX
  • Scale-out storage like Coho Data

[Image: storage solutions and their associated requirements]

Customers of potential server-side caching solutions are always working from a point of reference. Their angle often tends to be storage-centric, since storage performance traditionally requires resources at the storage tier.

Here’s how I understand each of the categories above. I intentionally focus on the value to the customer, as opposed to dissecting the architectures.

Server-side caching vs Converged Infrastructure

  • Converged infrastructure is an alternative to existing centralized storage devices
  • Server-side caching aims to offload from existing storage devices
  • Converged infrastructure includes a new instance of the hypervisor
  • Server-side caching integrates with existing hypervisors
  • Server-side caching could potentially benefit a converged infrastructure architecture, but is unlikely to be necessary

Server-side caching vs All-flash Arrays

  • All-flash arrays are high performance devices that are separate from existing storage systems
  • Server-side caching offloads I/O requests from your existing storage systems at a fraction of the cost
  • Server-side caching could potentially benefit an all-flash array, but is unlikely to be necessary

Server SAN vs Server-side caching 

  • Server SAN is a separate storage target from your existing storage systems
  • Server-side caching  offloads I/O requests from your existing storage systems
  • Server-side caching could potentially benefit a server SAN infrastructure architecture

Storage-side caching vs Server-side caching

  • Storage-side caching promotes “warm” data to a flash-based cache within a storage system
  • Storage-side caching requires further resources (CPU utilization and internal bandwidth) of a storage system
  • Server-side caching promotes “warm” data to a space within the server without sending requests over the storage network
  • Server-side caching requires resources from the server tier
  • Server-side caching could potentially benefit a storage-side caching infrastructure architecture

Additional note: based on the numbers, the storage tier is a significantly more costly resource than the server tier.

Software-defined storage vs Server-side caching 

  • Software-defined storage is a separate storage target from your existing storage systems
  • Server-side caching  offloads I/O requests from your existing storage systems
  • Server-side caching could potentially benefit a software-defined storage infrastructure architecture

Scale-out storage vs Server-side caching 

  • Scale-out storage is a separate storage target from your existing storage systems
  • Server-side caching  offloads I/O requests from your existing storage systems
  • Scale-out storage is most frequently purchased to scale out both capacity and performance
  • Server-side caching provides additional performance offload to existing storage systems without the increase in capacity
  • Server-side caching could potentially benefit a scale-out storage architecture

What can we conclude from all these details? 

Server-side caching really breaks the mold we have all internalized over the years.

You no longer need to consider upping your storage spend in order to manage the performance requirements of your environment.

What does that mean for you and me? There are no silly questions as we all figure out how these solutions, and which solution, fit best in a given environment. Different products will continue to fit best in different use cases.

Keep asking questions.

 


Today’s Lesson: Does Infinio positively or negatively impact FlexClones on Desktone?

[UPDATE] 2pm on March 12th, 2014

I learned a bit more about Infinio caching this morning from our CTO and co-founder Vishal Misra, whom you can follow on Twitter here. I’ve updated the caching details below.

# End UPDATE

My favorite part of learning is the dialogue.

Not the lecture. Not reading the (f-ing) manual.

I love that moment when you grasp an idea well enough to pivot; to take a leap of faith by following your understanding to its logical ends.

If I understand that this does that, will it work well with this too?

Moments like these are when learned facts become knowledge: an idea becomes applicable to some ends.

Today I was asked to see if my facts were knowledge… and it took a little while. But I got there.

So here’s a question for you: knowing what you know of Infinio Accelerator, would it positively or negatively impact FlexClones on Desktone?

I first had to brush up on Desktone, which was just a few Googles away thanks to Desktone, Brian Madden, and the NetApp community.

Said briefly, Desktone is a cool VDI product. FlexClone is a storage-based feature for copying files at the storage layer. Desktone can leverage FlexClone to offload virtual desktop provisioning. Makes sense.

So what’s the answer on how Infinio plays in their relationship?

FlexClone does its copying within the storage system itself, which is excellent for provisioning speed. Infinio won’t see this traffic given that it does not go through the virtualization layer and out through our Accelerator.

So could Infinio Accelerator speed up the time it takes Desktone to provision new sessions using FlexClone?

No.

That said, we will already have a copy of recently accessed files in our cache, given that each new desktop is an exact copy of an existing system.

Since Infinio Accelerator’s global cache deduplicates by content, a new desktop would be nearly guaranteed to have frequently accessed files already populated in cache.

[Update]

As Peter discusses on the Grey Beards Podcast, Infinio maps addresses to a digest of what data is stored where in the cluster. The digest then points directly to the stored content so it can be retrieved on request without further lookup.

[Image: address to digest to content]

This design means that each new desktop spun up in a Desktone environment just needs to map its requested addresses to the digests we already have stored in cache.

# End Update
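To picture the two-level mapping Vishal described, here is a toy content-addressed cache in Python. It is emphatically not Infinio’s implementation, just a sketch of why identical cloned desktops end up pre-warmed: the address map grows per desktop, but identical content is stored only once.

    # Toy address -> digest -> content cache (illustration only).
    import hashlib

    class DedupCache:
        def __init__(self):
            self.addr_to_digest = {}     # (vm, block address) -> content digest
            self.digest_to_content = {}  # content digest -> cached block

        def put(self, address, block):
            digest = hashlib.sha256(block).hexdigest()
            self.addr_to_digest[address] = digest
            # Identical blocks are stored once, however many addresses point at them.
            self.digest_to_content.setdefault(digest, block)

        def get(self, address):
            digest = self.addr_to_digest.get(address)
            return self.digest_to_content.get(digest) if digest else None

    cache = DedupCache()
    cache.put(("desktop-01", 42), b"shared OS file")
    cache.put(("desktop-02", 42), b"shared OS file")  # a clone: new address, same digest
    print(len(cache.digest_to_content))  # 1 -> only one copy of the block is cached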

This sounds like a perfect case for how our product plays nicely with others.

With Desktone’s use of FlexClone, your storage system efficiently provisions new desktops that already have cached content inside Infinio. New desktops will benefit from deduplicated content in cache, increasing the effective cache size without requiring further DRAM.

It’s a win/win. I hope someone with this architecture reaches out for a trial so I can update this post with real-world numbers as well.


Open Discussion: Feature Comparison Across Server-side Caching Solutions

Vaughn Stewart inspired me with his post on the all-flash array market. He outlined different solutions with a technical eye and discussed products out in the open.

To cut to the punchline, you can see the live spreadsheet here [UPDATED March 11th].

[Image: server-side caching side-by-side comparison, March 11, 2014]

We’re at a time of excitement for server-side caching, and I see that the community needs a similar feature-centric comparison in this market as well.

So let’s start with the core tenet of server-side caching: each solution decouples storage capacity from storage performance by offloading requests from the storage tier.

Storage is odd in that way. It’s both our capacity for applications — how much stuff we can store — and, often, the performance bottleneck for those same applications.

[Image: capacity and performance, the odd couple]

Server-side caching is the simple idea of caching content closer to the application. Its byproduct is two-fold:

1. Relieve unnecessary requests sent to the overtaxed storage system, providing headroom and longevity

2. Accelerate applications by responding to requests more quickly (less latency) and more consistently (with less variability in latencies over time)

Said another way, I see two key elements to every SSC value proposition.

Headroom: Focused on forecasting how much workload you can afford to add to an existing system.

Acceleration: Focused on the benefit server-side caching has on a given workload or set of workloads.
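A back-of-the-envelope calculation ties the two together. All of the numbers below are made up for illustration; the point is only how a hit ratio converts into offloaded I/O (headroom) and a lower average latency (acceleration):

    # Made-up numbers: how a cache hit ratio becomes headroom and acceleration.
    total_read_iops = 20_000
    hit_ratio = 0.70          # fraction of reads served by the server-side cache
    cache_latency_ms = 0.2    # assumed server-side (RAM/flash) response time
    array_latency_ms = 8.0    # assumed backend array response time

    offloaded_iops = total_read_iops * hit_ratio
    array_iops = total_read_iops - offloaded_iops
    avg_latency_ms = hit_ratio * cache_latency_ms + (1 - hit_ratio) * array_latency_ms

    print(f"Reads offloaded from the array: {offloaded_iops:,.0f} IOPS")
    print(f"Reads still hitting the array:  {array_iops:,.0f} IOPS")
    print(f"Average read latency:           {avg_latency_ms:.2f} ms vs {array_latency_ms} ms uncached")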

[Image: capacity and performance decoupled by server-side caching]

So what are your options as a consumer?

Conversations like these always run the risk of talking marketing positions instead of facts. To avoid this outcome, I’ve had the good fortune of connecting with great minds in our industry who have kept me focused. Thanks are due to Jon Kohler, Josh Coen, Tim Antonowicz, and Chris Wahl.

Here is what I found through public resources and by connecting with peers. Note that question marks are boxes I could not find answers to online.

You can see the live spreadsheet, which I’ll be updating along with this post, here.

There are some interesting observations to make:

  1. Most solutions require and then utilize flash hardware to build a cache
  2. There is not a great deal of information made public about how SSC solutions install or uninstall
  3. Only Infinio and PernixData offer deduplicated cache
  4. Only Infinio and Proximal Data offer globally distributed cache

But back to Vaughn for a moment. He begins his post with:

“The performance capabilities and datacenter/environmental benefits of All-Flash Arrays are widely understood.”

Now here is where our efforts greatly differ.

There’s still a significant portion of those who could benefit from server-side caching that are unfamiliar with the offering. Amongst these early observers, even fewer are familiar with vendor options.

Is it like server SAN? Does it compete with converged infrastructure? It’s a “no” in both cases, but I’m going to focus on getting into the granularity of these questions over the coming posts.

If there is one fact I want everyone to know by the end of this post, it’s that server-side caching solutions complement each other. Each of these products offers longevity for existing storage system investments and can even improve other SSC solutions.

[Image: Tier all the things]

There is a great deal more efficiency to gain by offloading I/O further up the stack. My brilliant colleague, Peter Smith, often notes how there are tiers of disks in arrays and L1-L3 caches on CPUs, yet nothing similar across layers in the data center.

The ultimate goal is to move as much cache as possible as close to the application as possible without breaking the bank.

That’s where each offering may differ — cost, scalability, compatibility, ease of use.

With that, I hope this post starts a dialogue between those who can benefit from SSC and those who offer solutions.

If you see any inaccurate or missing details here, please comment below!


The Standing Desk is No Silver Bullet

There was a strong reaction from my social channels when I made the move to a standing desk a week ago.

It seems many are on the fence — is it a great idea or a silly trend?

[Image: my standing desk setup]

I’ve been using my new design regularly for a week, with a loose schedule:

  • Stand 8-11am
  • Sit 11am-1pm
  • Stand 1-3pm
  • Alternate 3-6pm, depending on the day

The practice has been nice – it breaks up the day with a little reminder popping up in my calendar until I get in the habit. I tend to put a stretch or two between these steps too.

There is no doubt it interrupts the day, but I see that as part of the benefit. The myth of sitting for 8 hours and being productive is exactly that. Dynamic movement helps me keep the day moving forward. I feel more aware of my body in a way I appreciate as someone who would like it to last.

The results have had an unfortunate side effect, however. My wrists started to hurt by the end of the day, sometimes pretty badly. It grew uncomfortable enough today that I knew something had to change.

So is the standing desk my savior from an early grave as I’ve read since 2011 or is it a false idol?

It’s How You Use It

A few Googles got me to The Human Solution, which highlighted my error. At a little over 6’3″, I had my keyboard and mouse sitting far below the recommended height.

[Image: Recommended Height for Giants]

I added a few stacks of paper to the mix to test the theory out, and I’m feeling so much better because of it.

[Image: wrist position while standing]

The good news is that I feel great with my elbows at the right level. All the pressure on my wrists is gone, which is well worth the minor inconvenience of moving the paper out of the way when I sit down.
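If you want to skip the trial and error, the common guidance is that the keyboard surface should sit at (or just below) your standing elbow height. Here is a rough helper; the measurements are made-up examples, not ergonomic gospel:

    # Rough helper based on the common "keyboard at standing elbow height" guidance.
    def keyboard_adjustment(elbow_height_in, current_surface_in):
        delta = elbow_height_in - current_surface_in
        if abs(delta) <= 0.5:
            return "Close enough; keep your elbows at roughly 90 degrees."
        direction = "Raise" if delta > 0 else "Lower"
        return f"{direction} the keyboard surface by about {abs(delta):.1f} inches."

    print(keyboard_adjustment(elbow_height_in=47.0, current_surface_in=42.5))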

Some may get to this point and say “well, of course it was.” It’s not like the table height is a secret.

That said, the act of standing at work comes so heavily recommended of late that you can forget you still need to do it right to reap the benefits.

I’m getting closer, but still not there yet.

If you have recommendations for products in a reasonable price range or schedules that you follow, please do share.

And remember to advocate for people to transition to a standing desk that’s sized correctly as opposed to just a standing desk in general.


Quick Post: vCenter Server Appliance (VCSA) Up! Thanks to #Community

I had some significant user errors while setting up the VCSA in my lab environment. The first go taught me the fifteen-minute rule for appliances, which keeps me from digging too deep into a rathole. This next round had a few benefits that are always worth repeating:

  • I started by referencing this fool-proof step-by-step on the Internet, thanks to Jonathan Frappier
  • To avoid further SSO errors, I went forward using the 5.5 VCSA instead of 5.1, which manages the 5.1 cluster perfectly
  • I created an A record in DNS and made sure it updated its associated pointer (PTR) record
  • I made a silly mistake, adding my default gateway address as my preferred DNS server, which led me to this great troubleshooting post regarding VCSA and DNS
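Since the A/PTR mismatch is such a common VCSA tripwire, here is a quick sanity check using nothing but the Python standard library. The hostname is a made-up example; the script does the forward lookup, then the reverse lookup, and complains if they disagree (it will raise an exception if either lookup fails outright):

    # Check that forward (A) and reverse (PTR) DNS agree before deploying the VCSA.
    import socket

    hostname = "vcsa.lab.local"  # hypothetical FQDN for the appliance

    ip = socket.gethostbyname(hostname)         # A record lookup
    reverse_name = socket.gethostbyaddr(ip)[0]  # PTR record lookup

    print(f"{hostname} -> {ip} -> {reverse_name}")
    if reverse_name.rstrip(".").lower() != hostname.lower():
        print("WARNING: PTR record does not match the A record; fix DNS first")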

These notes got me through my installation issues. I had just one further tweak to make given my configuration.

I decided to get the most out of my small environment by reducing vCenter Server’s RAM down to 4GB.

The community got my back again:

Given the certainty of Tom, I didn’t belabor the point :)

It worked too.

So, a rhetorical question or two for you: 

Is there a great deal of documentation on how to configure vCenter Server Appliance? 

Yes. Absolutely.

Did I sincerely appreciate working through this installation by leveraging other people’s personal experiences? 

Yes. Absolutely.

Toward the end of this Twitter thread, Phillip Jones said it best:


When Automation Fails

by Jeremy Wonson @virtualwonton

I had a little bit of an adventure last night at DFW, and it made me consider how careful we must be as we embrace automation.

So there I was, stopped between terminals on a SkyLink train at DFW airport with a handful of people I had never seen before. It was dark outside and with the tinted windows, we really couldn’t see a thing. The train had stopped for no apparent reason, since we weren’t at a station. I waited about 5 minutes before putting down my bags and peering out of the darkened windows, cupping my hands around my eyes like I was holding binoculars to block out the lights inside the train.

No one was coming to our rescue.

Everyone else just stood there and stared straight ahead, not like this was normal, but like they had no idea what to do. The tension was pretty thick, probably because we all felt a little helpless and many were trying to catch connecting flights. I decided I would take action, and used the emergency phone.

As I was waiting for the engineer, I was reminded of a consulting project I was fortunate to be a part of a few years ago. We were doing a Business Impact Analysis for a company that distributes products to a few national retail chains (household names). They touted the level of automation they had achieved using a variety of technologies (like EDI, RFID, etc).  This allowed them to streamline order placement and fulfillment, billing and payment processing, and improve their operations using concepts like just-in-time inventory. Automation helped them save money and offer better service to their customers. But what happens when it fails?

In the case of the distributor, they immediately said they could take orders the “old fashioned” way. The stores would contact the call centers and order over the phone. The customer service reps would write down the orders by hand and call the distribution centers, and so forth. The financials would wait until the systems were back online. They estimated they could do this for up to a week, if needed.

Of course, when we asked if the customer would know how to order without the computer, they weren’t sure. We were given the same answer regarding the customer service reps knowing how to take orders and alert the distribution centers. Long ago they were trained to take product orders, but most had not taken an order in years, if ever. When pressed, they weren’t even sure the reps had paper order forms anymore. They had a pretty reliable system, and the automation they enjoyed was now something they took for granted.

Within 5-10 minutes an engineer came to our rescue. He checked the train cars, determined the cause of the failure and within another 10 minutes he manually drove the train into the next station.

At that point the train started working normally again. The guy next to me made a crack about the operator holding down CTRL-ALT-DEL and waiting for Windows to reboot.

Once the train resumed, everyone breathed a sigh of relief, and operations returned to normal.

I was reminded that with all the focus today on automation, it’s important to remember how to do things the “old fashioned” way. This isn’t a huge problem today, because the same technologists enabling the automation are the people who have been doing this work for years now. We have the experience. What about the next generation of IT professionals, the people we hire in the next 5-10 years? Those that begin their career with automation in place will not have the benefit of our experience unless we make documentation and training a priority.

Is the interruption of automation always a bad thing?

As I left the airport, I remembered I had to pay for parking using my credit card. Normally I just drive up to the gate, my toll tag is scanned, the gate opens and I leave – the receipt is available online. The reason I had to pay using my credit card is because I forgot to update my credit card information online. They couldn’t bill me for my tolls anymore.

This is a case where automation actually should force me to do something manual. I have to do it the “old fashioned” way for a very good reason. By the way, it left an impression. When I got home, I immediately updated my credit card info online. I like automation. I have a better experience as a consumer when it works for me.

When we’re automating IT, we need to keep this lesson in mind.

We don’t want to get so wrapped up in automating the user experience that we forget that there are cases when a person legitimately needs to interact with another person.

This means creating workflows with human error in mind, as well as making human intervention a part of your workflows where it makes sense. It also means creating processes to capture new and/or evolving requirements for your workflows.
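One way to make that concrete is to model the approval gate as just another step in the workflow. The step names and approval logic below are invented for illustration; the only point is that a human checkpoint can live inside the automation rather than outside it:

    # Invented example: an automated workflow with a human approval gate in the middle.
    def provision_storage():
        print("automated: carving out the new datastore")

    def request_human_approval(change_summary):
        # In a real system this might open a ticket or page an operator.
        answer = input(f"Approve change? {change_summary} [y/N] ")
        return answer.strip().lower() == "y"

    def decommission_old_array():
        print("automated: retiring the old array")

    workflow = [
        provision_storage,
        lambda: request_human_approval("retire array ARR-01 and migrate 40 VMs"),
        decommission_old_array,
    ]

    for step in workflow:
        if step() is False:  # a human said no; stop the automation here
            print("Stopping: a person needs to review this before we continue.")
            break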

Many times when clients want to talk about automation and orchestration, the human element is overlooked. Today that human element saved my bacon, giving me hope that the day will never come when human intervention has lost its value, when computers take over the world, when cyborgs travel back in time to…oh, sorry. For some reason a movie series popped into my head.

:-) Happy Automating!

Jeremy Wonson: Cloud Architect and vSpecialist focused on Service Providers and Systems Integrators. Known to enjoy automation, orchestration, SAP and bacon.

@virtualwonton
jeremywonson@gmail.com
jeremy.wonson@emc.com


Help Tell the Infinio Story Through This Study

We have an open invitation to technical minds to provide feedback on our UIX. Click here to send an email right away and get in on the opportunity.

Before we get to the details, however, I want to talk about why I find this kind of research so important.

We are all storytellers, tracing invisible signals as they move from magnetic charge to electric current and back again.

This fact is often out of mind as we quickly build PowerPoints and shoot off emails. We see letters on screens while silicon calculates, liquids crystallize into crisp color, and refresh rates flash faster than we can detect. I’m left amazed in the rare moments these facts come to the forefront of my mind.

[Image: signal vs. noise]

And think of the consistency!

The protocols and physical layers between Point A and Point B are as numerous as they are reliable.

Yet day after day the screens I depend on work in the way I think they should.

We got to this state of simplicity through genius contributions. This is the point at which I make reference to Steve Jobs and the iPod, but there are far more unsung heroes of our user experience out there.

[Image: PEBKAC, by zStag]

And that’s most often where the disconnect happens.

Keeping track of that signal is easy for our computational systems, but we humans continue to need more: to need different.

I respect those who try to make something simpler than it once was, especially without losing an eye for details.

There is a special kind of Technologist that works hard to make an experience simpler than it would be otherwise. A colleague of mine is one of those Technologists.

You should sign up for a study with him.

Whether you’re passionate about virtualization and/or storage or whether you just want $150, your input will go to continuing our storytelling.

________________________________________________________

From the team:

“We’re looking for IT admins to participate in a paid usability study.

The study takes about 1.5 hours and pays $150.  The study can be conducted at anytime and can be run over a WebEx or on-site.

A little bit about Infinio - we’re making a software-only storage performance product that will integrate with an existing ESX infrastructure.  We’re based right down the street from Kendall Square and recently released a 1.0 version of our interface.  In order to continue improving the product we’d like to get your feedback on our interface and see what you think.

Please contact Tom Rand (trand@infinio.com) and he will ask a few questions as well as work out scheduling.

TL;DR:
Pay: $150
Duration: ~1.5 hours
Location: On-site or remote
Agenda:  Run through various scenarios in our product’s interface to get your feedback”