Friday, 15 June 2012

Collaboration Corollary

Who in the real world doesn't have some IIS web sites?

Recently a new(ish) piece of software came to my attention: Aptimizer.  These guys recently joined forces with (read: got bought by) Riverbed.  Riverbed are the WAN optimisation people.

Back to Aptimizer, the claim is that this little piece of software, once installed, can convert your IIS website, homebrewed or not, from a slow and painful experience (which took a lot of developer effort :) ) into a blisteringly fast and fun one, with no developer involvement at all.

What?

I don't need to talk to the developers or the server team??  My site is just faster, despite the best efforts of the original web/server monkeys?  How is this magic done?!

It's quite simple really.  From a browser's perspective it is quicker to load fewer, larger items than it is to load lots of small ones.  Let's take IE for instance: it may make 10 connections to your IIS site, and each connection can download one item at a time.  A typical website may have around 50+ items, so it is pretty easy to do the maths.  IE has gotta work pretty hard tearing down connections and making new ones (although this doesn't always happen, which can cause even more performance issues!)

So reducing the number of connections required can produce faster websites.  Couple this with whitespace and comment removal (comments?  Do developers even read these?!  Your browser certainly doesn't!), which shrinks the files further.
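
To get a feel for what the whitespace and comment stripping actually does, here's a crude sketch in PowerShell (the paths are made up, and a real minifier is far cleverer about strings, CSS hacks and the like):

# read a stylesheet, strip /* comments */ and collapse runs of whitespace
$css = [IO.File]::ReadAllText('C:\inetpub\wwwroot\styles.css')    # hypothetical path
$css = $css -replace '(?s)/\*.*?\*/', ''                          # drop the comments
$css = ($css -replace '\s+', ' ').Trim()                          # collapse whitespace
[IO.File]::WriteAllText('C:\inetpub\wwwroot\styles.min.css', $css)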

Another pretty smart feature of Aptimizer is Data URIs: this is the embedding of data (normally an image) into the HTML rather than sending a separate image file.  Nice.  Use this with sprites (grouping all images into one image, an uber-collage) and yet again you get more shrinking and fewer connections.
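
If you've never met a Data URI before, this is roughly how one gets built - sketched in PowerShell with a made-up image path; the resulting string goes into the img tag's src attribute in place of a normal URL:

# base64-encode an image and wrap it up as a data: URI
$bytes = [IO.File]::ReadAllBytes('C:\inetpub\wwwroot\images\logo.png')   # hypothetical image
$base64 = [Convert]::ToBase64String($bytes)
"data:image/png;base64,$base64"   # goes into <img src="..."> instead of a file path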

Not all websites work with it though.  I've attempted this on a couple of sites I have, one of which is the System Center Operations Manager site.  Not good.  You can fiddle around with the settings, but I'm an out-of-the-box kinda guy.  So I haven't.

There is a special edition solely for Microsoft Office SharePoint Server sites, which is pretty much the market they are going for; Microsoft even use Aptimizer themselves, it seems!

Thursday, 24 May 2012

The Myth of Finger Prints

Something most established IT environments will have is a password policy.

This will say something like: accounts lock out after 3 failed attempts, passwords must change every 3 months, and passwords must meet certain length and complexity requirements.
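
If you want to see what your own domain actually enforces right now, a one-liner from any domain-joined box will show the lockout threshold, maximum password age and minimum length:

# show the effective domain password and lockout policy
net accounts /domain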

Whilst there is nothing outwardly wrong with having a password policy, things have to move on; we no longer live in the 90s, where we had one password and one device.

My suggestion would be to base password policies on the impact of that password being breached, weighed against the likelihood of it being breached.

Or, to put it another way, have some common sense around user passwords and treat them differently to service or high-security passwords.

Sure, have your company bullion locked behind a password that is 64 characters long, complex, locks after one failed attempt and changes every hour.  However, the average user account shouldn't be treated the same.

Have a password rating based on the business impact of that password being breached.  This rating should determine what level of policy the password should have.

I hate the 3-strikes rule; irrespective of how many strikes you are allowed, lockout is a denial-of-service attack on your own users.

I don't like my password changing either; again, this leads to issues where a remote user might not complete the password change process correctly and end up with out-of-sync passwords.  Add into this an iPhone or Android device connecting to email and you have a major service desk headache.

I do like a long password which never changes or gets locked out.

I read somewhere that a 14-character password will take a dedicated, specialised application 5 seconds to crack when it has access to the hash of the password.  Interesting, but the likelihood is that a person who is after a user's password will find social engineering far more effective and practical.

I think we should encourage our users to create their own "algorithm" for password creation, set the password once, and never lock it.  Ok, let's say change it once a year.  I would also throw the idea of never writing passwords down into the bin.  Let your users own their own passwords.  They will write them down anyway!  You may find that people are more willing to choose complicated passwords if they feel they can write them down.

Here's a rubbish "starter for 10" on a user's algorithm - the initials of the days of the week, or of the months of the year....

SMTWTFS or JFMAMJJASOND

Obviously all this gumph is in your Security Policy right?  What do you mean you don't have one! ;)

Friday, 18 May 2012

Enterprise Mortality

Whilst drowning myself in internet blogs I remembered once saying something a long time ago...

"The xxx is not Enterprise ready"

Replace xxx with whatever new gadget was coming up...iMac, Psion, Newton, whatever was around in the 90s.  This was used carte blanche as a pendulum axe under which the hapless user would have to lie should they want to connect their new shiny toy to the "Enterprise".  100% kill rate.

Recently I heard this repeated - yes, I was earwigging.

I am officially rescinding my comment.  Well, amending it.  It probably was true in the 90s (when I said it!)

Now I believe the opposite is true.

"The Enterprise is not ready for xxx"

Far too often I hear my colleagues saying "no" to users' requests.  This is not due to anything other than the fact that they are asking us for something we don't know how to fit into our enterprise, or - God forbid - hadn't even heard of.

I now think that, in IT terms, the Enterprise as we knew it is dead.  Or at least needs to be killed off.  It is the shackle that holds IT back in the 90s.  Yes, we've got to figure out what to do about consumer products and whatnot, but unless we kill the Enterprise, at least in our minds, we are toast.

The Desktop Motion

Someone somewhere said, "Server virtualisation is soooo cool.  I bet we can do the same for desktops!"

And so began the marketing strategy to sell VDI to the Enterprise.  I think that was about 5 years ago.  Where's your VDI?

Now don't get me wrong.  I love VDI.  I just don't see it as a solution to any real problems the Enterprise has at the moment.  IT professionals are eating up the marketing Kool-Aid and implementing VDI against a very flimsy, if indeed any, business case.

What needs to happen first is a culture change around desktop deployment.  Too often it is seen as a stepping stone for the apprentice/student/newbie - a kind of easy fix for finding something for someone with relatively little IT experience to do (change tapes, anyone?).

Desktop delivery needs to be valued: as I've said before, the desktop is how users connect into the infrastructure.  Get this wrong and no amount of technical wizardry in the data centre will numb the pain.  Too often is heard "I really love the new SharePoint site...when I can get to it".  Ok, I was pushing it with SharePoint, but you get the idea.

Anyway, back on track.  VDI.  Basically, if you have a good desktop - by that I mean UAC is ON, users are NOT administrators, My Documents (at least) is redirected to a network share and you have a solid and reliable build process (MDT, for example) - then VDI could be implemented.
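
A quick and dirty way of checking those prerequisites on a candidate desktop might look like this (PowerShell, nothing clever, and the checks are only indicative):

# is UAC switched on? (1 = yes)
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System').EnableLUA

# is the current user running as an administrator? (we want False)
$id = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
$id.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

# where is My Documents really pointing? (a UNC path suggests redirection is in place)
[Environment]::GetFolderPath('MyDocuments')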

However pushing VDI past implementation/pilot/IT and into the big wide world is an entirely different noodle salad.

I've been looking at the Citrix XenClient.  This is a bare-metal or "type 1" hypervisor for laptops.  This allows you to have a number of builds on one device.  Obviously the number of VMs is limited by the amount of memory you can cram into your laptop.

As with all VDI, I still feel that XenClient is not ready for the mainstream user base.  It is a useful tool for an engineer who needs to carry various builds around to different sites, or for a testing team to check how things may affect users in different domains by having VMs joined to different domains, all on one laptop.

I guess I should mention that Citrix also has a server piece in this VDI puzzle which can synchronise the VM desktop back to the server.  Yeah, whatever.

Mailbox determination

Exchange 2010 comes replete with four mailbox types.

Equipment
Room
User
Linked

There is not much between them to be honest; the Room mailbox has some extra attributes, such as capacity, but they are all just mailboxes, albeit with nice icons.

The interest comes when the Calendar Attendant is used. 

On a standard User mailbox the attendant does some rudimentary housekeeping *yawn*, but on Room and Equipment mailboxes the attendant can approve or reject meeting requests.

This is really cool, as the Room or Equipment mailbox can now effectively manage its own calendar.  You can have a "real" person own the calendar too for fine tweaking, but calendar conflicts can be handled for the organiser: they receive a kind email saying that the room is not available during that time.
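
For the curious, setting one of these up from the Exchange Management Shell looks something like this (the room name, capacity and delegate are made up for illustration):

# create a room mailbox and tell Exchange how big the room is
New-Mailbox -Name "Boardroom" -Room
Set-Mailbox "Boardroom" -ResourceCapacity 12

# let the Calendar Attendant accept or decline automatically, with a human delegate for the awkward cases
Set-CalendarProcessing "Boardroom" -AutomateProcessing AutoAccept -AllowConflicts $false -ResourceDelegates "facilities@contoso.com"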

If the organiser is sensible, it is also possible to add a Room or Equipment mailbox as an attendee of the meeting and use the Scheduling Assistant to see when the Room or Equipment is actually available.

Unfortunately Microsoft have missed a trick here: whilst you can search for a "resource" based on the extra attributes, like room capacity, the search is clunky, and it is possible to jam a room past capacity without the Room mailbox ever querying it.  It would be nice if the Calendar Attendant were able to count the attendees (less itself, of course) and decide whether the room is suitable.  Maybe feeding this back using the excellent MailTips..?

Another annoyance is that you cannot link resources, so you cannot link the calendar of a projector to the calendar of a room.  This means you have to add both to the invite to book them.  It would be better to allow either to be booked separately, but have booking the container also book the contained.  MailTips could be used here again, to feed back whether an added room's resources are available.

One step at a time I guess.

Tuesday, 15 May 2012

Mobius Strip Conjecture

Streams. Streams. Sleep on a bed of streams.

Hmmm.  Streams.

So everyone knows about AppV streaming applications to your doorstep.  Not many people know that Citrix has been able to do this since long before AppV was snaffled up by the King of Snaffling, Microsoft.

Yes.  Citrix can stream applications - IIRC this has been possible since Presentation Server 4.  To do this you needed a separate client called the offline client.  You also needed an online client to access all your online - or standard - Citrix applications.  Pretty nasty.

This then turned into one client called Receiver, where you could extend the power of Receiver with plugins.

Again, still not very nice, as now Citrix has deigned to drop support for GPO deployment; reading the next sentence you can see why...  So now we have an agent which is supported on Windows, Mac, Linux, iOS, Android and Blackberry, however we have no reliable - read single - method of deploying it!

"Oh yes there is."

Ok you knew that was coming.

Merchandising Server.  This is a virtual appliance which, once set up, can manage and deploy the required plugins and agents to your environment.  All devices can browse to and sign in to a website hosted by the appliance and receive the correct version of the Receiver agent and all the plugins which the device supports.

As I mentioned, this currently means: online plugin, offline plugin, AppV plugin.  Hang on, did I say AppV plugin?

It is possible to use Receiver to get your AppV applications.  Whilst this initially sounds redundant - "Why would I want Citrix in the way of AppV?" - bear with me.

Remember that there is no AppV client for MacOS, iOS, Android or Blackberry.

Cool.

Citrix have added, in keeping with the current trend, a storefront to the applications pool.  You can now select applications which you wish to use from a "pool" of approved applications.  No longer will you have countless prescribed applications hanging around on your desktop.

Couple this with the fact that Citrix have consolidated the Receiver interface across devices.  Receiver now looks the same no matter what device you use.

Did I mention seamless session state across devices too?  Work on a document on the train on your iPad, get into work, log into your desktop, and the application and document will "move" over to your desktop.

Excited.  Me.  Never.

The Star Nosed Mole Hypothesis

If you have done an "ipconfig /all" on your Windows 7 box lately you will have noticed that there are a lot more adapters available than previously...they have odd names too.

Teredo Tunneling Pseudo-Interface, IP-HTTPS, 6to4 and ISATAP ...hmmm, interesting....possibly.  But in themselves they are not much more than ways to get IPv6 traffic across networks using HTTP or IPv4...why would you want to do that?

Well, IPv6, whilst quite old in technology terms, hasn't been embraced as quickly or as widely as the "industry" would like.  This meant that technologies which act like glue (or transition technologies, as some like to call them) were put into Windows 7 to allow IPv6 to talk across IPv4 networks.

The requirements of Teredo and 6to4 (6to4 needs a public IPv4 address, and Teredo struggles with certain flavours of NAT and is often blocked) mean that these guys have limited use - who doesn't do some sort of NAT on a router now?

However, IP-HTTPS is of use.  Consider that IPv6 has IPSEC baked in, so the tunnelled traffic can be encrypted.  What you have now is a way of securely connecting to a network using IPSEC, but without user "knowledge" or a separate client, from ANYWHERE.
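
You can poke at these transition interfaces on your own Windows 7 box without any extra tools; a couple of netsh one-liners show whether Teredo and the IP-HTTPS interface are actually doing anything:

# current state of the Teredo pseudo-interface
netsh interface teredo show state

# state of the IP-HTTPS tunnel interface (the interesting one for what follows)
netsh interface httpstunnel show interfaces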

Hang on, isn't this just SSL VPN?  In a way, yes.

But a VPN which:
  • needs no user interaction or extra passwords
  • is on by default - boot your machine to the network from ANYWHERE
  • GPO deployment to remote users? Done.
  • Patching the sales force?  Done.
  • No specialised hardware is required (a backend Windows 2008 R2 server terminates the connections, and clients must be Windows 7).
  • All you need is a certificate and a domain username/password on a domain-joined Windows 7 machine and bingo.
Sounds like a magic bullet!

The catch?

Currently it is quite an involved process to get this working.  There are plenty of documents out on the web describing how to set this up.

Oh, and the name (strangely, Microsoft have come up with something reasonable!):

DirectAccess

Thursday, 10 May 2012

Exodus Maneuver

Something which is pretty important to me is how desktop delivery is achieved. 

To the end user there is nothing more important than their desktop.  Without this no other services will work. 

Getting desktop delivery right is or should be our wholemeal bread and organic butter.

Vista brought about the revelation that is Windows Deployment Services (WDS), effectively killing the little-used Remote Installation Services (RIS).  To be fair, RIS was pretty clunky and really showing its age, however it was really effective at what it did.  Couple WDS with the Microsoft Deployment Toolkit (MDT) and the Windows Automated Installation Kit (WAIK) and you have a really powerful and, above all, usable deployment system.

Yes, I know you can then add SCCM (or SMS as it was) into this mix to provide zero-touch installs, but really, who has the time ;)

All these tools are freely downloadable from Microsoft.

Ok, so now you have a super slick way of building machines, and you might even have investigated the very cool method of driver injection and the driver store to support various machine models with one build.
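
As an aside, feeding drivers into the MDT driver store can be scripted too; this is roughly what the MDT wizard generates for you (the deployment share path and driver folder here are placeholders):

# load the MDT snap-in and map the deployment share as a drive
Add-PSSnapIn Microsoft.BDD.PSSnapIn -ErrorAction SilentlyContinue
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"

# import a folder of extracted drivers into Out-of-Box Drivers
Import-MDTDriver -Path "DS001:\Out-of-Box Drivers" -SourcePath "C:\Drivers\SomeLaptopModel" -Verbose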

One day a user comes to you with an issue; it is clear that "Windows Rot" has set in and nothing short of a format C: and a rebuild will fix it.  So you locate a USB disk and copy all the "important/business critical" photos, music and other niff-naff off the C: drive - maybe you even have folder redirection to reduce this (a bit).

Luckily you are using "some" application virtualisation, so the amount of locally installed software is comparatively low; however, despite your best efforts, this will never be zero.

The rebuild of the machine is completed and the data, applications and configuration are manually copied, installed or configured.

The user is likely to be without their laptop/desktop for a day, maybe two.  Often knowledge of this "time without PC" (TWPC) puts the support team, or even the end user, off using a rebuild - often called a "Scorched Earth" policy - as a viable option for fixing an issue.  This will almost certainly result in time spent in fruitless attempts to avoid the inevitable, or worse still, the user "puts up and shuts up".

Having convinced the user that a rebuild is the only way, you slave for hours meticulously copying data off the PC, rebuilding, and then checking the data back onto the fresh build.

Finally, bloodied and sweating, you hand the laptop/desktop back to the user, only to hear not five minutes later that a valued/business-critical piece of data has failed to make it through the PC->USB->PC process.

Ultimately the vast majority of the time (and mistakes) taken in a rebuild is due to the manual processes required.

Microsoft know this and have provided a pretty cool tool called USMT or, in normal-people talk, the User State Migration Tool.

This is an XML-driven (you can configure the XML if you are crazy enough) "business critical wedding photos" data copying machine.  Yes, out of the box it does printers, Office custom dictionaries, Outlook nickname files and anything with a well-known file extension.  And a whole host more.

Usage is pretty simple.  If you've already gone through learning MDT, WDS and WAIK, you pretty much know all there is to know about USMT too.

Where things get really cool is when you are moving to Windows 7.  Here USMT cuts down the time taken to copy the user data by not copying the data at all!  USMT 4 kicks in (USMT 3 is used for Vista) and uses hard links rather than shifting the actual data.  Very smart.

Take this one step further and use USMT with a refresh build rather than a replace build and there is no shuffling of data between machines at all - the OS is just refreshed in place!
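
For flavour, the hardlink migration boils down to a pair of commands (the store path and XML choices here are just for illustration; an MDT refresh task sequence wires all of this up for you):

# gather the user state into a local hardlink store - nothing actually gets copied
scanstate.exe C:\USMTStore /o /c /hardlink /nocompress /i:migdocs.xml /i:migapp.xml

# ...wipe and reload the OS in place (the "refresh")...

# then put the user state back from the same hardlink store
loadstate.exe C:\USMTStore /c /hardlink /nocompress /i:migdocs.xml /i:migapp.xml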

Finally "Windows Rot" is fixable (even avoidable?) and a "Scorched Earth" policy are practical and indeed you could encourage your users to rebuild more often as a matter of course to avoid issues rather than as a last resort.

The Greek Postulate

I was going to write about Kerberos but have been interrupted by a desire for tea.

- back -

Earl Grey if you are interested.  Black. No sugar.

Kerberos.  Google it and you'll find it is the mythical three-headed dog guarding the gates of Hades, or something.  It is also a clever authentication mechanism developed by bods at MIT.

Windows 2008 has pretty much taken Kerberos to heart and it is now the de facto authentication choice - NTLM, anyone?  In fact, Windows 2008 takes this a step further and can handle Kerberos authentication within the kernel (kernel-mode authentication in IIS, for example), improving speed and security whilst reducing complexity - in many cases you no longer need custom SPNs!!!

I'm probably getting ahead of myself.  How does Kerberos work?

The set up is pretty complex but bear with it.  To help, I'll use a typical scenario: logging into your PC in the morning...

There are many parts/services to Kerberos.  But basically you have parts/services running locally on your client and parts running on the network.

You log into your PC and the local client's Kerberos service sends an authentication request to the Kerberos service running on the network (on a domain controller, in Windows terms).

The network Kerberos service knows ALL the passwords.  This service creates a TGT and sends it back to the client; however, the reply is encrypted using the password associated with the username.  Sneaky.

You must therefore have the correct password to decrypt and open the reply, which then means you have access to the TGT inside.

The TGT or Ticket Granting Ticket is basically a way of telling the network Kerberos service that you have already successfully authenticated.  Once you have a TGT, the network Kerberos service will only ask for the TGT and not your username/password.

This is pretty cool as the password has not left the client PC, yet authentication has happened!

A TGT lasts 10 hours by default (and can be renewed for up to a week), or until you log off.

You can then use the TGT to request further tickets (service tickets) to access other services.  Printing, for example.  These tickets expire too (the AD default is 10 hours, the same as the TGT).

Due to the reliance on time stamps, it is imperative that the clocks of clients and servers are in sync (within 5 minutes of each other, by default).

One more thing to note is that Kerberos is a mutual authentication mechanism; this means that both parties need to authenticate to each other.  To print, you need to provide a ticket to the print server, but the print server also needs to prove itself to you, proving that the service you have asked for is actually that service.

So where in all this do SPNs or service principal names come in?

These are *just* common names used to access a service which are tied to the account running that service. 

So if we had a website we accessed using www.mysite.com, and on the web server the actual service was running under the credentials DOMAIN\webservice, you would need an SPN for the HTTP service on www.mysite.com tied to DOMAIN\webservice.  In Windows this would mean:

SetSPN -A HTTP/www.mysite.com DOMAIN\webservice

Easy.
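
And if you want to check what is already registered against the account before (or after) you meddle:

# list the SPNs currently registered on the service account
SetSPN -L DOMAIN\webservice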

Windows 2008 R2 often doesn't need custom SPNs because, with kernel-mode authentication, the ticket is handled using the machine's own account rather than the service account - and, as you remember, Kerberos is now run from within the kernel.

For fun (or troubleshooting) you can use the cmdline "klist" on Windows to show all the current tickets you have!
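
A couple of variations are worth knowing (all built into Windows 7 / 2008 R2):

# show all cached Kerberos tickets for the current logon session
klist

# show just the Ticket Granting Ticket
klist tgt

# throw the lot away and force fresh tickets - handy after group membership changes
klist purge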