Identity & presence: the key to anyone’s Unified Communications strategy

I spend a lot of time talking with customers about what Microsoft is doing with various new technologies, mostly involving or revolving around the Unified Communications stuff with OCS and Exchange. It’s really interesting to see how many people just “get” the point of UC technology, whereas others are either blind to its potential, or even sticking their fingers in their ears, shutting their eyes and repeating “no, no, no” – in denial that a lot of this stuff is coming whether they like it or not.

I don’t mean that software companies are somehow going to compel everyone to adopt it; it’s more that end-users themselves will expect to use technology at work which they have grown used to at home. For several years now, it’s been typical for people to have better IT at home than they’d have in the office – from faster PCs and bigger flat screens to the software they use – and it’s exactly this kind of user who has driven the growth of services like Skype, and who has possibly helped shape the way enterprises will look at telecoms & communications in the future.

Various pieces of research, such as Forrester Research’s 2006 paper on “Generation Y” types (as reported at TMC.Net), predict that people born in the 1980s and beyond are adopting technologies into their lives faster than previous generations did… and as those same “Millennials” make their way into the workforce, they’re bringing their expectations with them – and possibly facing the “Computer says no” attitude that some, er, older IT staff might still be harbouring.

Instant Messaging concerns

It’s already been reported that teens use IM more than email, so it seems inevitable that IM will come to the enterprise one way or another. Some enterprises have turned something of a blind eye to “in the cloud” IM services such as Windows Live/MSN Messenger, AOL, Yahoo, Google Talk etc. Others have actively shut down access to these services by blocking firewall ports. Both approaches will need, at some point, to be re-evaluated or formalised through acceptable use policies and the like – just as businesses in the past withheld internet access or even email from users, out of concern that they’d waste all their time chatting, or that opening up to the world would threaten security.

In reality, users will waste time on IM initially, just as they’ll probably spend work time surfing the web or playing Solitaire on their PC, but sooner or later they’ll get over the novelty and start using the technology to be productive – and even if they still “play” during working hours, the net effect will be positive.

IM as an email reduction strategy

Many people agree that they get too much email, and that culturally, email is used when it would be better to pick up the phone or talk to someone face-to-face. IM can reduce the volume of email sent, not just by absorbing the disposable communications (the “have you got a minute?” type) but because people who aren’t online at the time don’t tend to get IMs at all. It’s all too easy to blast an email out to a group asking for help – and when the people in that group who’ve been out of the office next log in, they’ll get your request… even though your problem may well have been solved by then. That just doesn’t happen with IM, and some customers I’ve talked with estimate that adoption of enterprise IM brings a >50% drop in internal email volumes.

Presence is the magic ingredient

What makes IM useful is the “presence”: the knowledge of who in the company is available and in a position to respond to you – even, possibly, people you’ve never added to a contact list, as you’d need to do with the public services. Cliff Saran of Computer Weekly wrote a blog post recently which was scathing about presence, but which illustrates a fundamental lack of understanding of what it “is”:

Yes it’s fine to be able to know that someone is free, but it relies on the user having to update their Presence each time they walk over to the coffee machine, have a chat and a laugh with a colleague, go to the toilet, leave for the train, get home, go to the pub, have dinner, watch TV and go to bed.

— “Microsoft’s unified productivity killer”, Cliff Saran, 28th August 2007

Sorry Cliff, but you’re about as far wrong as it’s possible to get without changing the subject entirely. The whole point of presence is that it’s something the user shouldn’t have to worry about. And if they do want to control it, they can. Culturally, some people won’t want to use the technology at all, which is fine… though sooner or later they may realise they’re losing out, and come back to the party.


I start my PC up, and if it finds a network, Office Communicator logs in and marks me as online. When my Outlook calendar says I’m busy, my presence changes to “In a meeting”. When I pick up the phone, it’s “In a call” – all done automatically.

When I lock my screen (as I do – WindowsKey+L – any time I’m away from my desk for more than a few seconds), my status goes to “Away”, and restores when I log back in. If I just walked away without locking, then after 5 minutes I’d be “Inactive”, and 10 minutes after that, “Away” (at least, those are the default timeouts and behaviour… they can be tweaked). And all the while, by clicking that big coloured button in the top left, I can override the automatically set presence and choose it myself. Or even sign out.
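
As a minimal sketch of that logic only – the names are invented, and the real client’s rules are doubtless more subtle – the automatic part boils down to a simple precedence: a manual override wins, then a locked screen, then idle time, using the default thresholds described above:

from enum import Enum

class Status(Enum):
    AVAILABLE = "Available"
    INACTIVE = "Inactive"
    AWAY = "Away"

# defaults described above: 5 minutes of idle time to "Inactive",
# a further 10 minutes to "Away" (both tweakable in the real client)
INACTIVE_AFTER = 5 * 60
AWAY_AFTER = INACTIVE_AFTER + 10 * 60

def automatic_status(idle_seconds, screen_locked, manual_override=None):
    if manual_override is not None:
        return manual_override          # the big coloured button always wins
    if screen_locked:
        return Status.AWAY              # WindowsKey+L: straight to "Away"
    if idle_seconds >= AWAY_AFTER:
        return Status.AWAY
    if idle_seconds >= INACTIVE_AFTER:
        return Status.INACTIVE
    return Status.AVAILABLE

print(automatic_status(6 * 60, screen_locked=False))   # -> Status.INACTIVE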

As well as controlling what my own status is (and by extension, how phone calls will be routed to me and when), I can also set what level of information I’m prepared to share with others – from allowing select people to interrupt me even when I’ve set “Do not disturb”, to blocking people from even seeing that I’m online at all.

Presence and UC telephony

Look at the strategies of any IT or telecoms company involved in this space: finding a user (based on some identity, probably not just their phone number) and seeing their presence is a key part of the value of UC. Integrating it into the other applications and devices the user is working with, and giving the user the choice to use it or not as they see fit, is vital if presence is to be adopted and embraced, rather than rejected by users as Big Brother-ism or an invasion of privacy.

The Return of Exchange Unplugged

In late 2005, to prepare for Exchange 5.5 going out of support (and to help customers understand what was involved in moving up to Exchange 2003), we did a really well-received tour of the country arranged around the theme of “Exchange Unplugged”.

We all wore “tour T-shirts” (in fact, every attendee got one), and keeping with the theme, I even carried my acoustic guitar and provided musical accompaniment at the start of each session. The nearest I’ll ever get to being paid to play music, I don’t doubt.

Anyway: we’re doing it all again! With 8 “gigs” around the country, and session topics titled:

  • Warm up act & welcome
  • Architecture Acapella
  • Migration Medley
  • Email & Voicemail Duet
  • Mobility Manoeuvres in the Dark
  • Y.O.C.S. (that’s about Office Communications Server).

… it’s clearly no ordinary event. Come along and see Jason try to squeeze into the tour shirt without looking like Right Said Fred, or find out if the YOCS session is presented wearing a stick-on handlebar moustache and leather hat.

Dates:

The Joy of Mapping

We all tend to take maps for granted. In the 17th and 18th centuries, and even beyond, there were decent-sized areas of the world which were only just being explored and mapped for the first time. Now, the ease of access to cartographic data means we don’t give maps much of a second thought.

I bought a couple of Ordnance Survey Explorer maps the other day, and was quite surprised at how expensive they are – £7.99 each – and started wondering if they were worth the money, when I could just go ahead and get data online for free. There’s something unique about poring over a real map, though: not necessarily looking for anything, just finding out what’s there. A neighbour came round at one point when I was looking through my new maps, and said that (like me) he used to sit in the car as a passenger and study the maps of the places they were driving through. He even used to take the Atlas of the World to bed and just look at it, which I figured was a bit weird and best not discussed any further.

Thinking about how accessible mapping information has become brings up a few interesting points, though. Ordnance Survey maps are actually pretty good value, given that they must cost a fair bit to print and distribute. And if you’re out on a walk or a cycle in the middle of the country, knowing that you could get a decent aerial view from Google Earth or Windows Live Local is of little use, whereas a good map in your pocket makes all the difference.


Meanwhile, I’ve become a big fan of Windows Live Mobile, especially after bonding my CoPilot Bluetooth GPS receiver with the Smartphone (tip: it’s a BTGPS3 unit, and the passkey is unfathomably set to 0183 by default).

I’ve also used CoPilot for Smartphone as an in-car GPS/navigation aid, and it works really well (even if you don’t have a place to mount the phone properly, it can bark instructions from the passenger seat, just like a real navigator or navigatrix would). There are also lots of other fun apps (like Jason’s favourite, SportsDo) which can use GPS to record where your device has been – for later analysis on your PC. Or here, a developer at MS has built a real-time GPS locator which sends his coordinates back to a web service on his PC, so his family can see where he is all the time. Spooky, maybe…
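
A tracker like that is simpler than it sounds, because most Bluetooth GPS units just stream standard NMEA 0183 sentences over a serial connection. Purely as a sketch – the port name and URL below are made-up placeholders, and this certainly isn’t that developer’s actual code – the whole loop is: read a sentence, parse out the fix, post it to a web service:

import serial      # pyserial
import requests

TRACK_URL = "http://example.com/track"     # hypothetical endpoint

def parse_gga(sentence):
    # pull latitude/longitude out of a $GPGGA sentence (NMEA 0183)
    f = sentence.split(",")
    if len(f) < 6 or not f[0].endswith("GGA") or not f[2]:
        return None
    lat = float(f[2][:2]) + float(f[2][2:]) / 60
    lon = float(f[4][:3]) + float(f[4][3:]) / 60
    if f[3] == "S": lat = -lat
    if f[5] == "W": lon = -lon
    return lat, lon

with serial.Serial("COM5", 4800) as gps:   # 4800 baud is the NMEA norm; the port is a guess
    while True:
        fix = parse_gga(gps.readline().decode("ascii", "ignore"))
        if fix:
            requests.post(TRACK_URL, json={"lat": fix[0], "lon": fix[1]})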

Autoroute vs online maps

I remember when Autoroute first came out, in the early 1990s: a DOS application which shipped on floppy disks, and which cost hundreds of pounds at the time. The target audience was fleet delivery managers and the like, who would generate route plans for their drivers rather than have the trucks wander their own routes, taking longer and using more fuel than was optimal. So even though Autoroute cost a lot of money, it could save a lot more, and was considered funds well spent.

Microsoft bought the company that made Autoroute, and released the by-now-Windows application for a much more reasonable price: Autoroute 2007 retails today for about £50, or about £85 with a USB GPS receiver.

It’s quite interesting that Autoroute 2007 now has direct integration with Windows Live Local – so you can find somewhere in Autoroute, then search the web for information about local businesses, or view the aerial/hybrid views from that point. It’s easy to imagine future evolutions of Windows Live Local offering more of the route-planning capability that Autoroute is so good at, though UI-wise that could be more of a challenge…

Currently, Windows Live Local doesn’t offer the ability to do more than a simple “drive from here/to here” route – there are no waypoints, and no “avoid this area” type functionality. Google Maps does offer some of these things, but it’s not quite as slick as Autoroute for now.

Rather than loading up Autoroute, though, it’s often quicker to go straight to the likes of Windows Live Local and zoom to a place you’re interested in (maybe you’re thinking of buying a house, for example – the single most useful application of this technology, if my experience of house hunting last year is at all typical). The usage patterns of all these types of application are changing as the technology gets better.

One cool current use of mapping technology is Bikely.com, which uses Google Maps to let users draw routes (or import them from GPS devices), then share them with others. It still has a long way to go functionality-wise when it comes to smart route planning, but it’s easy to use for the basics, and is a good portent of things to come.

The process of testing Halo 3

I came across a fascinating article on Wired which looks into some of the processes that Bungie, the developers of the Halo game series for Xbox and Xbox 360, have been using to test the latest iteration, Halo 3.


Thousands of hours of testing with ordinary game players have been recorded, and the footage synchronised so the developers can replay what was on the screen, what the player’s face looked like, and what buttons were being pressed at any point. They even record details of where and how quickly the players are getting killed, so they can figure out if some bits of the game are just too hard.

There have been a series of test waves of Halo 3, some for MS and Bungie employees only, plus one public beta phase. The public beta was itself unprecedented in scale – over 800,000 users, racking up 12 million hours of online play. Do the maths, and that’s getting on for 1,400 years of continuous play…
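
For anyone who wants to check the working, it’s a one-line sum:

hours = 12_000_000            # total online play across the public beta
print(hours / (24 * 365))     # ~1,370 years of continuous play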

The internal-only tests have been kept necessarily confidential (“It is not cool to talk about the alpha” was a common warning to anyone who thought about leaking screenshots or info). The final phase of testing is underway, and the game is said to be very nearly complete.

I’m not going to say any more about how the game looks, sounds or plays – except that you’ll all be able to find out just how awesome it is on the 26th September (in Europe – the US gets it on the 25th). Might as well book the 27th and 28th off as holiday already 🙂

Playing with a Roundtable prototype

I’ve been looking forward to Roundtable coming out… it’s a very interesting hybrid of a standard conference speakerphone and a set of webcams, all tied together by plugging it into a PC and running the new Live Meeting 2007 client software.

The concept of Roundtable is quite simple really – put it in a room with a round table in the middle, and people who join the meeting online will see a panoramic view of what’s going on in the room, with the “active speaker” identified in software based on where the sound is coming from. Participants who aren’t in the room can be the active speaker too, if they have a webcam attached.
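
I don’t know the detail of how Roundtable’s speaker detection actually works, but as an illustrative sketch of the general idea – pick whichever direction around the table is loudest over a short audio window – something like this captures the flavour (the direction names and window length are invented for the example):

import numpy as np

def active_speaker(frames):
    # frames maps a direction name to the last ~200ms of audio samples
    # from the microphone facing that way; return the direction with
    # the most acoustic energy (RMS)
    rms = {mic: np.sqrt(np.mean(samples ** 2)) for mic, samples in frames.items()}
    return max(rms, key=rms.get)

# toy demonstration: three directions around the table
rng = np.random.default_rng(0)
frames = {
    "north": 0.05 * rng.standard_normal(3200),   # background hum
    "east":  0.80 * rng.standard_normal(3200),   # someone talking
    "west":  0.05 * rng.standard_normal(3200),
}
print(active_speaker(frames))                    # -> east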

I got my hands on a prototype device the other day to have a play with (so I could figure out how to talk to my customers about it), and gathered a bunch of others in the same room…


We messed about for half an hour or so, and recorded the whole meeting – resulting in a series of files at about 2MB per minute, including the surprisingly high-quality video. The first picture above shows me just stretching my arm around the device, which caused great hilarity – as if some kind of freaky Mr Tickle were sitting in the room.

Mark Deakin (UC product manager at Microsoft UK, and the active speaker on the left of the picture above) was trying to emulate “Brother Lee Love” from the Kenny Everett TV show of the 80s…


The quality was very good, and once we start using these things in anger, the novelty of the camera will soon wear off and it’ll be useful for real business purposes… 🙂

I have to say, I was fully prepared to be underwhelmed (ie the risk of over-promising and under-delivering seemed on the high side), but instead I was blown away by the Roundtable – even though the device itself could probably benefit from a number of physical improvements.

I can’t wait for them to be deployed around our campus now!

The Roundtable user guide & quick reference card have already been published, and the device should be available through your local Microsoft subsidiary in the next few months.

The business case for Exchange 2007 – part III

This is a continuation of an occasional series of articles about how specific capabilities of Exchange 2007 can be mapped to business challenges. The other parts, and other related topics, can be found here.

GOAL: Lower the risk of being non-compliant

Now here’s a can of worms. What is “compliance”?

There are all sorts of industry- or geography-specific rules around both data retention and data destruction, and knowing which ones apply to you – and what you should do about them – is pretty much a black art for many organisations.

The US Sarbanes-Oxley Act of 2002 came about to make corporate governance and accounting information more robust, in the wake of various financial scandals (such as the collapse of Enron). Although SOX is a piece of US legislation, it applies not just to American companies, but to any foreign company with a US stock market listing or a US parent.

The Securities and Exchange Commission defines a 7-year retention period for financial information, and for other associated information which forms part of the audit or review of that financial information. Arguably, any email or document which discusses a major issue for the company could be required to be retained, even if it makes no specific reference to the impact on corporate finance.

These requirements can understandably cause IT managers and CIOs to worry that they might not be compliant with whatever rules they are expected to follow – especially since the rules vary hugely in different parts of the world and, for any global company, can be highly confusing.

So, for anyone worried about being non-compliant, the first thing they’ll need to do is figure out what it would take for them to be compliant, and how they can measure up to that. This is far from an easy task, and a whole industry has sprung up to try to reassure the frazzled executive that if they buy this product/engage these consultants, then all will be well.

NET: Nobody can sell you out-of-the-box compliance solutions. They will sell you tools which can be used to implement a regime of compliance, but the trick is knowing what that looks like.

Now, Exchange can be used as part of the compliance toolset, in conjunction with whatever policies and processes the business has, to ensure appropriate data retention and a proper discovery process that can prove something either exists or does not.

There are a few things to look out for, though…

Keeping “everything” just delays the impact of the problem, doesn’t solve it

I’ve seen so many companies implement archiving solutions where they just keep every document or every email message. I think this is storing up big trouble for the future: it might solve the immediate problem of ticking the box to say everything is archived, but managing that archive is going to become a problem further down the line.

Any reasonable retention policy will specify that documents or other pieces of information of a particular type or topic need to be kept for a period of time. It won’t say that every single piece of paper or electronic information must be kept.

NET: Keep everything you need to keep; decide (if you can) what is not required to be kept, and throw it away. See a previous post on using Managed Folders & policy to implement this on Exchange – the sketch below shows the shape of the decision.
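
As an illustration only – the retention periods here are invented for the example, not real guidance; the numbers that matter come from whichever regulations actually apply to you – the logic any retention policy boils down to is something like:

from datetime import datetime, timedelta

# illustrative periods only -- your regulators define the real ones
RETENTION = {
    "Financial Records": timedelta(days=7 * 365),   # e.g. an SOX-style 7 years
    "General Mail":      timedelta(days=90),
}

def can_discard(received, folder, now=None):
    # True when an item has outlived its folder's retention period --
    # the "decide what is not required to be kept" half of the NET above;
    # items in folders with no policy are kept indefinitely
    period = RETENTION.get(folder)
    if period is None:
        return False
    return ((now or datetime.now()) - received) > period

print(can_discard(datetime(1999, 1, 1), "General Mail"))   # -> True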

Knowing where the data is kept is the only way you’ll be able to find it again

It seems obvious, but if you’re going to get to the point where you need to retain information, you’d better know where it’s kept – otherwise you’ll never be able to prove that the information was indeed retained (or, sometimes even more importantly, prove that it doesn’t exist… even if it maybe did at one time).

From an email perspective, this means not keeping data squirrelled away on the hard disks of users’ PCs, or in the form of email archives which can only be opened via a laborious and time-consuming process.

NET: PST files on users’ PCs or on network shares are bad news for any compliance regime. See my previous related post on the mailbox quota paradox of thrift.

Exchange 2007 introduced a powerful search capability which allows end users to run searches against everything in their mailbox, be it from Outlook, a web client or even a mobile device. The search technology makes it so easy for an individual to find emails and other content that a lot of people have pretty much stopped filing emails and just let them pile up, knowing they can find the content again quickly.

The same search technology offers an administrator (and this would likely not be the email admins: more likely a security officer or director of compliance) the ability to search across mailboxes for specific content, carrying out a discovery process.

Outsourcing the problem could be a solution

Here’s something that might be of interest even if you’re not running Exchange 2007 – having someone else store your compliance archive for you. Microsoft’s Exchange Hosted Services came about through the company’s acquisition of FrontBridge a few years ago.

Much attention has been paid to the Hosted Filtering service, where all inbound mail for your organisation is delivered first to the EHS datacentre and scanned for potentially malicious content, with the clean stuff then delivered down to your own mail systems.

Hosted Archive is a companion technology which runs on top of the filtering: since all inbound (and outbound) email is routed through the EHS datacentre anyway, it’s a good place to keep a long-term archive of it. And if you add journaling into the mix (where every message internal to your Exchange world is also copied up to the EHS datacentre), then you can tick the box of having kept a copy of all your mail without really having to do much. Once you’ve got the filtering up & running, enabling archiving is a phone call away, and all you need to know at your end is how to enable journaling.

NET: Using hosted filtering reduces the risk of inbound malicious email infecting your systems, and of you spreading infected email to other external parties. Hosting your archive in the same place makes a lot of sense, and is a snap to set up.

Exchange 2007 does add a little to this mix though, in the shape of per-user journaling. In this instance, you could decide you don’t need to archive every email from every user, but only those of certain roles or levels of employee (eg the HR and legal departments, plus board members & executives).

Now, using Hosted Archive does go against what I said earlier about keeping everything – except that in this instance, you don’t need to worry about how to do the keeping… that’s someone else’s problem…

Further information on using Exchange in a compliance regime can be seen in a series of video demos, whitepapers and case studies at the Compliance with Exchange 2007 page on Microsoft.com.

Sometimes, you know you didn’t pay enough

… (and sometimes you probably suspect you paid too much)

A common trait in Western cultures is the eye for a good deal – you know, getting two for the price of one, or thinking it’s worth buying something because it’s on sale and you’ll save 25%, rather than because you really needed or wanted it beforehand.

I saw a quotation the other day which set me thinking… John Ruskin, the leading 19th-century English art critic, all-round intellectual and writer on culture & politics, said:

“There is hardly anything in the world that someone cannot make a little worse and sell a little cheaper, and the people who consider price alone are that person’s lawful prey. 

It is unwise to pay too much, but it is also unwise to pay too little. 

When you pay too much, you lose a little money, that is all. When you pay too little, you sometimes lose everything because the thing you bought is incapable of doing the thing you bought it to do. 

The common law of business balance prohibits paying a little and getting a lot… It can’t be done.

If you deal with the lowest bidder it is well to add something for the risk you run. 

And if you do that you will have enough to pay for something better.” — John Ruskin (1819-1900)

This is something the executives at Mattel toys are maybe mulling over right now, but it’s probably a valuable lesson to any consumer, at whatever price point, about the risk of going for the absolute cheapest.

There’s probably an economic principle to explain all this, but I’ve no idea what it’s called

As it happens, I’ve been getting back into cycling recently and that’s required me to spend a great deal of time and money poring over bikes & accessories, whilst learning about all the differences between manufacturers, model ranges etc.

In short, they’re all much of a muchness. Just like computers, consumer electronics or cars – is last year’s model really so inferior to the all-shiny new one that it’s worth paying the premium for the up-to-date one? And how can a single manufacturer make such a huge range of related products and still retain its aspirational brand values (quality, excellence, durability, performance, blah blah blah)?

I’ve pretty much come to the conclusion that for any individual at any point in time, there is a point below which whatever it is you’re looking at is just too cheap, too low-spec for your needs. Sure, I can buy a mountain bike for £50 in supermarkets or junk shops, but it’ll be heavy and not as well screwed together as a more expensive one I might get from a good cycle shop.

There’s a similar principle in all sorts of consumer areas – wine, as another example. It’s possible to buy wine at £3 a bottle, but it’s going to be pretty ropey. From £5 up, you start getting really noticeable improvements – maybe a £6 bottle of wine could be considered five times better than a £3 bottle – though it’s unlikely that this will carry on: at some point you’ll pay double, and the more expensive product will hardly be any better to most people. For someone, though, that might be the mid-point of their personal curve, stretching from too cheap at one end to too expensive at the other, with a nice flat middle bit where they really want to be.

The far end of that curve is the point where buying something too expensive is wasteful – if I only need the mountain bike to go to the shops on a Sunday morning for the newspapers, I can do without a lot of the lightweight materials or fancy suspension that a better bike would have. Ditto, if I’m an average cyclist, I won’t need a top-of-the-range carbon bike, since it won’t make any difference to my “performance” (though try saying that to all the golfers who regularly sink their salaries into the latest kit, without any meaningful impact on their game).

Maybe it won’t be “wasted”, but I just won’t have any way of judging it against other products near it on the curve – if I’m in the market for a MINI, and I look at the comparative price difference between a Ferrari and an Aston Martin, I won’t rationally be able to say that one is better and worth the premium over the other.

So what does any of this have to do with software?

A two-fold principle, I suppose. On one hand, maybe you shouldn’t buy the latest and greatest piece of software without knowing what it will do for you and why. And if you do buy the new version, have you really invested any effort into making sure you’re using it to its maximum potential?

Look at the new version of Microsoft Office, with the much-discussed “Ribbon” UI (actually, this link is a great training resource – it shows you the Office 2003 application; you click on an icon or menu item, and it takes you to the location of the same command in the new UI).

The Ribbon scares some people when they first see it: they think “all my users will need to be re-trained”, and maybe ask “how can I make it look like the old version?”.

The fact that the Ribbon is so different gives us an excellent opportunity to think about what the users are doing in the first place – rather than taking old practices and simply transplanting them into the new application, maybe it’s time to look in more depth at what the new application can do, and see if the old ways are still appropriate.

A second point would be to be careful about buying software which is too cheap – if someone can give it away for free, or it’s radically less expensive than the rest of the software in that category, are you sure it’s robust enough, and that it will have a good level of support behind it (not just now, but in a few years’ time)? What else is the supplier going to get out of you, if they’re subsidising that low-cost software?

Coming back to Ruskin: it’s quite ironic that a quick search for that quote online reveals lots of businesses who’ve chosen it as a motto on their websites. Given that Ruskin was an opponent of capitalism (in fact, he gave away all the money he inherited on his father’s death), I wonder how he would feel about so many companies using his words to explain why they aren’t cheaper than their competitors.

Keep the Item count in your mailbox low!

I’ve been doing a little digging today, following a query from a partner company who’re helping out one of their customers with some performance problems on Exchange. Said customer is running Exchange 2000, and has some frankly amazing statistics…

… 1,000 or so mailboxes, some of which run to over 20GB in size, with an average of nearly 3GB. To make matters even worse, some users have very large numbers of items in their mailbox folders – 60,000 or more. Oh, and all the users are running Outlook in online mode (ie not cached).

Now, seasoned Exchange professionals the world over will either shrug and say that this kind of horror story is second nature to them, or faint at the thought of this one – but it’s not really obvious to the average IT admin *why* this kind of story is bad news.

When I used to work for the Exchange product group (back when I could say I was still moderately technical), I posted on the Exchange Team blog (How does your Exchange garden grow?) with some scary stories about how people unknowingly abuse their Exchange systems (like the CEO of a company who had a nice clean inbox with 37 items totalling just over 100KB… but a Deleted Items folder that was 7.4GB in size, with nearly 150,000 items).

Just as it’s easy to get sucked into looking at disk size and capacity when planning big Exchange deployments (in reality, it’s I/O performance that counts more than storage space), it’s easy to blame big mailboxes for bad performance when, in fact, it could be too many items causing the trouble.

So what’s too many?

Nicole Allen posted a while back on the EHLO blog, recommending a maximum of 2,500-5,000 items in the “critical path” folders (Calendar, Contacts, Inbox, Sent Items), and ideally keeping the Inbox below 1,000 items. Some more detail on the reasoning behind this comes from the Optimizing Storage for Exchange 2003 whitepaper…

Number of Items in a Folder

As the number of items in the core Exchange 2003 folders increase, the physical disk cost to perform some tasks will also increase for users of Outlook in online mode. Indexes and searches are performed on the client when using Outlook in cached mode. Sorting your Inbox by size for the first time requires the creation of a new index, which will require many disk I/Os. Future sorts of the Inbox by size will be very inexpensive. There is a static number of indexes that you can have, so folks that often sort their folders in many different ways could exceed this limit and cause additional disk I/O.

The potentially important point here is that any folder is going to take longer to process as it fills up with items. Sorting, or any other view-related activity, will take longer, and even retrieving items out of the folder will slow down (hammering the server at the same time).
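
To see why the first sort hurts while repeats are cheap, here’s a toy model of the behaviour the whitepaper describes. The cap on the number of indexes is invented purely for illustration; the real Exchange limit and eviction behaviour will differ:

from collections import OrderedDict

MAX_INDEXES = 11                       # invented cap, purely for illustration

class Folder:
    def __init__(self, items):
        self.items = items             # e.g. a list of {"size": ...} dicts
        self.indexes = OrderedDict()   # sort key -> prebuilt ordering
        self.io_cost = 0               # stand-in for disk I/Os

    def sort_by(self, key):
        if key not in self.indexes:
            self.io_cost += len(self.items)          # first sort: touch every item
            if len(self.indexes) >= MAX_INDEXES:
                self.indexes.popitem(last=False)     # too many views: evict, rebuild later
            self.indexes[key] = sorted(self.items, key=lambda i: i[key])
        self.indexes.move_to_end(key)                # re-using an index is (nearly) free
        return self.indexes[key]

inbox = Folder([{"size": n} for n in range(60_000)])
inbox.sort_by("size")
inbox.sort_by("size")
print(inbox.io_cost)                   # 60000 -- paid once, however often you re-sort

With 60,000 items, every brand-new view is a 60,000-item pass over the folder; keep the count nearer 1,000 and the same operation barely registers.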

Oh, and be careful with archiving systems which leave stubs behind, too – you might have reduced the mailbox size, but performance could still suffer if the folders still contain lots of items.

Laptop melts – for once, it wasn’t the battery

Here’s a funny one – it happened a while back, but I was sent a link to the story today. The author kept her laptop in the oven when she wasn’t at home: she lived in a high-crime area, and the oven seemed a non-obvious place for a laptop to live…

Then one day she came home and her partner was cooking french fries… and presumably hadn’t looked in the oven before switching it on 🙂

I suppose it makes a change that the laptop was melted by external factors, rather than the battery causing some internal pyrotechnics.

Even more amazing: the thing booted up and worked just fine!

OCS 2007 trial edition now available

If you want to get your hands on trial software for the recently-released Office Communications Server 2007 and its client, Office Communicator 2007, then you’re in luck…

Bear in mind that these trials are for tyre-kicking and lab testing only – don’t put them into full-blown production. They will also expire after 180 days, though they can be upgraded to the released and fully supported code.