The biggest file I’ve ever seen – 3TB PUB.EDB

Well, I haven’t seen it for myself, but I was sent a screenshot. Actually, it was three different Exchange public folder servers, each of which had ~3TB of public folder data…

[screenshot]

That’s scary and impressive in equal measure.

Reminds me of some of the stories people posted in response to my How does your Exchange garden grow? post nearly 3 years ago, on the Exchange Team blog…

Tips for optimizing Vista on new hardware

Ed Bott over at ZDNet posted a really interesting article yesterday, detailing the journey of getting his friend’s brand new Sony Vaio laptop to work properly with Windows Vista Business. In short, his friend upgraded from a trusty old XP Vaio to a new machine which came with Vista, but had a terrible experience of crashes, slow start-up, bogged-down performance and the like.

In a nutshell, the advice is pretty straightforward, at least for technically minded folk, and backs up the experience of some of us who’ve been using Vista all through the beta program:

  • Start with Vista-capable hardware. It’s almost a waste of money trying to upgrade old PCs to run Vista. New machines – which (supposedly) have been designed to run Vista, with modern architectures, devices that stand a good chance of having decent Vista drivers, and enough horsepower to do it justice – are so cheap now that it’s hardly worth trying to tweak anything more than a couple of years old to get Vista working well on it.
  • Use the latest, best-quality drivers you can. It still amazes me how many manufacturers ship machines pre-loaded with years-old device drivers, or (conversely) how many update drivers & BIOSes frequently but with poor attention to quality. The device driver certification program is there for a reason: if a piece of hardware comes with a non-certified driver, you have to ask – if the manufacturer didn’t bother getting it certified, where else did they cut corners?

    I got a new Lenovo Thinkpad tablet a few months ago, and it was (and still is) a brilliant piece of kit. Lenovo have done a class-leading job of making it easy to keep everything up to date – including the system BIOS – in a single application, the ThinkVantage System Update. Think of that as a single app which already knows exactly what hardware you have, and checks the Lenovo site to see if there’s anything to update.

    I’ve had so many PCs where the vendor’s driver download page needs you to know everything about the internal bits of the hardware (Dell, stand up and be counted) – after choosing the machine type, why do I need to know which iteration of network controllers it has, or whether it’s got the optional super-dee-dooper graphics card or bog standard one? Can’t the manufacturer figure that out, especially if they ask for a serial number to help identify what the machine is?

  • Don’t put any unnecessary crapware on it. This starts off as a fault of the OEM who supplied the machine (sorry Dell, I have to single you out again, but you’re far from unique). It’s worth making sure you don’t install any old junk from the internet and leave it lying around on your machine. Ed Bott even suggests doing some basic installs (like Acrobat, Flash etc) then taking a full machine backup, so you can always revert to a nice starting point. Combine that with the Really Rather Good backup software in Vista (or even the Windows Easy Transfer software) which can make sure your data is safe, and it’s not unthinkable that every six or twelve months a savvy user could easily blow away the machine and recover the starting image & last data backup to be in a good state again.

    Most people accept that they need to service a car regularly to keep it running well – a modern PC is a good bit more complicated than a car (albeit with generally less terrible consequences if it all goes boom).

Part of Ed’s summary neatly encapsulates his thinking…

Well, for starters, Vista doesn’t suck. And neither does Sony’s hardware. That four-pound machine with the carbon-fiber case is practically irresistible, as my wife continues to remind me.

But when you shovel Windows Vista and a mountain of poorly chosen drivers, utilities, and trial programs onto that beautiful hardware without thinking of the customer, the results can be downright ugly. That was certainly the case with the early-2007 vintage Vaio, and it’s still true today, with too much crapware and not enough attention to quality or the user experience.

Tip for finding when an appointment was created

Here’s a tip for when you suspect someone has magicked up an appointment to coincidentally collide with an Outlook meeting request you sent them…

In your own calendar (and other people’s), you can see when a meeting was scheduled (ie request was sent or created), as well as other facts (like when you accepted it) – eg:

[screenshot: meeting properties showing when the request was sent and when it was accepted]

If a blocked out time in the calendar is just an appointment (ie something that was just put there by the owner of the calendar), you don’t see the date it was added…

[screenshot: a plain appointment, with no created date shown]

Remember, they’re all just forms in the end 

Way back when Exchange was young (it started at version 4.0), the design was that emails, meeting requests etc were just an "item" (a collection of fields, which differ depending on the type of item), plus a "form" associated with a particular kind of item, with the Message Class denoting which form to use.

In other words, an email message would have fields like Sender, date, recipients, subject, etc. And when you went to open a message, the Exchange client (later, Outlook) would look at the class on the item (IPM.Note, for a message) and would find the appropriate form to open that item. Clear? If you really want examples of lots of different Outlook items, see MSDN.
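To make that concrete, here’s a tiny sketch – my own illustration, not anything from the original post – using Python with the pywin32 COM bindings to print the Message Class of a few items in the Inbox and Calendar. It assumes Outlook is installed with a configured profile:

```python
# Minimal sketch: peek at the Message Class on a handful of items,
# assuming Outlook is installed and pywin32 (win32com) is available.
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application")
ns = outlook.GetNamespace("MAPI")

# 6 = olFolderInbox, 9 = olFolderCalendar
for folder_id, name in [(6, "Inbox"), (9, "Calendar")]:
    folder = ns.GetDefaultFolder(folder_id)
    items = folder.Items
    for i in range(1, min(items.Count, 5) + 1):  # Items collection is 1-based
        item = items.Item(i)
        print(f"{name}: {item.MessageClass} - {item.Subject}")
```

You’d expect to see IPM.Note for ordinary messages and IPM.Appointment for calendar entries – the form Outlook opens is chosen from exactly that class.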

Anyway. If I’m looking at an appointment which wasn’t a "meeting" (ie it was just put into my or someone else’s calendar, not via a meeting request/acceptance), I might not be able to see the date it was created, but the underlying item definitely does have that property. Displaying it in Outlook is pretty straightforward, if a little contrived. Here’s one quick & dirty method of doing so (I may post a more elegant solution if there’s interest)…

1. Get to "Design this form"

Older versions of Outlook had a Developer item on the menu structure which (via several pop-outs, if I recall) let you design the current form. Outlook 2007 simplified the menus (now using the Ribbon) and no longer shows that Developer menu. One quick way of putting it back is to add that specific command to the "Quick Access Toolbar"…

Click on the little down-arrow just to the right of the Quick Access Toolbar on the top left of a form (eg the form of the appointment you’re looking at), then choose "More Commands"…

On the resulting dialogue, select the Developer tab in the "Choose commands from:" drop-down list box, then scroll down to find "Design This Form" (note "This Form", not "a Form…"). Select that command, click on Add, then OK out of the customize dialogue.

[screenshot: the Customize Quick Access Toolbar dialogue]

Now you have a little icon in your toolbar which is supposed to represent designing actions (pencil, ruler, set square):

[screenshot: the Design This Form icon on the Quick Access Toolbar]

Click on the icon and you get into the form designer, with the current item loaded. You’ll see a bunch of tabs – these correspond to "pages" within the form, and any in brackets are hidden. Select the "All Fields" tab, then choose Date/Time fields from the drop-down (or try "All Appointment fields").

[screenshot: the All Fields tab in the form designer]

You should now see just the date fields, including the original creation date…

[screenshot: the date/time fields, including the created date]

This might seem a real palaver, but once you have the icon on the QAT, it’s a 5 second action to show the dates… and can be very handy 🙂
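And if you’d rather skip the form designer altogether, the same created date can be read programmatically. Here’s a minimal sketch – my addition, not part of the original tip – using Python with pywin32 against the Outlook object model; it assumes Outlook is running and the appointment is selected in the calendar:

```python
# Minimal sketch (assumes Outlook is running and pywin32 is installed):
# print the CreationTime of whatever is selected in the active Outlook
# window - the same underlying property the All Fields tab exposes.
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application")
selection = outlook.ActiveExplorer().Selection

for i in range(1, selection.Count + 1):  # Selection collection is 1-based
    item = selection.Item(i)
    print(f"{item.Subject}: created {item.CreationTime}")
```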

Imperialism, Metric-centricity and Live Search

I’m a child of a mixed-up time when it comes to measures and the like. I am feet and inches tall and stones and pounds heavy; when it’s cold outside, it’s below zero degrees, but when it’s hot, it’s in the 80s.

I learned small measurements in mm and cm, so have no real idea how big an inch is, but long distances are thought of in miles (and petrol is bought in litres to go into a car which reports how many miles per gallon it’s getting).

Now and again, I’ll need to try and recall how many chains there are in a fathom, or ounces per metric tonne, and typically call on the services of a search engine. That used to mean searching for something like:

[screenshot: a typical "convert x to y" search query]

… where we’d normally get taken to a site in the results which has a wizard of its own to do the calculation. Often, the reason I want to convert something is that I’m already doing a calculation and just need to know the ratios involved…

Which is why I love the little innovation that Live Search introduced:

[screenshot: Live Search showing the conversion answer right at the top of the results]

Right at the top of the results list, there you have it – dead right this is useful 🙂
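Of course, when the conversion is part of a bigger calculation anyway, it’s just as easy to keep the ratios in the calculation itself. A throwaway sketch of mine (the conversion factors are the standard ones, nothing to do with Live Search):

```python
# Toy conversion arithmetic using standard factors.
FEET_PER_CHAIN = 66.0            # 1 chain = 22 yards = 66 feet
FEET_PER_FATHOM = 6.0            # 1 fathom = 6 feet
GRAMS_PER_OUNCE = 28.349523125   # avoirdupois ounce
GRAMS_PER_TONNE = 1_000_000.0    # metric tonne

print(f"1 fathom = {FEET_PER_FATHOM / FEET_PER_CHAIN:.4f} chains")
print(f"1 metric tonne = {GRAMS_PER_TONNE / GRAMS_PER_OUNCE:,.0f} oz")
```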

Exchange 2007 clustering advice

I appreciate it’s been a while since I blogged last – a combination of "not much to talk about, really" with even more "no time to talk about it"… 🙁

Anyway, a few questions came in the other day from a reader:

SCR and CCR seem to work with SAN and DAS. When DAS (direct attached or local storage) is used – and most probably it’s attached to the active node – how does the passive node function if it has no connection to the DAS/local storage of the active node?

In CCR, it’s important to realise that the passive node has its *own* set of disks, which contain its *own* copy of the data – it doesn’t really matter whether they are DAS or SAN disks (at least not conceptually). So, in a CCR failover scenario, the (as-was) passive node switches to being the active node and uses its own copy of the database (which by now becomes the main one). SCR is different in the way failover happens, but in principle it’s similar – the secondary copy of the data is brought online and takes over servicing the clients, using its own copy of their database.

Some clients are indicating that with CCR or SCR one wouldn’t need backups of mailbox servers. Do you have any comments?

Absolutely not. That’s like saying, because my car has an airbag, I don’t need to wear a seatbelt. Check out the High Availability Strategies section of the Exchange documentation for more detail on the options.

Having CCR gives you the ability to fail over in effectively real time, for the purposes of planned maintenance or after an unexpected failure. SCR adds the possibility of having another replica of the data, potentially in a different location, which can be brought online through a manual recovery process (whereas CCR will bring the data back automatically, since it’s part of a cluster).

Backup is still important (what happens if you lose all the servers? What about long-term archival of data?). There’s always the possibility that databases could be corrupted or infected in some way, and if that happened, the replica(s) of the databases would likely suffer the same fate too… so taking regular backups gives you the ability to roll back to earlier versions of the database.

There’s always the scenario where users delete some information that needs to be brought back sometime in the future – there are various options around item recovery with Exchange 2007, but if it was deleted (say) a year ago, then you’d be looking at a backup as the means of recovery.

Data Protection Manager would be worth looking into, to help with backup requirements – it allows you to take regular snapshots of a running server, which can later be spooled out to offline storage.

In SCR, is there a bandwidth utilization estimate for replication between the active and standby/passive nodes? I understand that in CCR and SCR the log sizes are reduced to 1MB from the standard 5MB, though.

The log files in Exchange 2007 are reduced from 5MB to 1MB anyway – partly because of CCR and LCR (and later SCR), but even if you don’t configure any of the replication technology, you’ll still be on 1MB logs.

As for how much bandwidth you’re going to need between nodes for replication – well, that depends. If your servers are very busy, they’ll obviously need to shift more data, and latency will come into play too.
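As a purely back-of-the-envelope illustration (the log generation rate below is a made-up figure – you’d measure your own servers’ rate, eg with perfmon, before doing any real planning):

```python
# Rough sketch of the arithmetic, not a sizing tool.
LOG_SIZE_MB = 1          # Exchange 2007 transaction log size
logs_per_hour = 2000     # assumed peak log generation rate - measure your own!

mb_per_hour = logs_per_hour * LOG_SIZE_MB
mbit_per_sec = mb_per_hour * 8 / 3600   # MB/hour -> megabits/second (approx)

print(f"~{mb_per_hour} MB of logs per hour, roughly {mbit_per_sec:.1f} Mbit/s sustained")
```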

There is a detailed section in the Exchange TechCenter online documentation which covers planning for replication at a hardware, software configuration and network level.