The last couple of decades have seen a revolution in user apps offering location awareness and guidance. Automotive sat-navs have been around rather longer, dating back to Honda’s Electro Gyrocator (now that’s a name) in 1981. CD- and HDD-based satnavs appeared in cars over the years since, but typically cost many thousands of dollars/pounds/etc as an option.
Google Earth began life in 2001 as Keyhole’s EarthViewer desktop app, relaunched under the Google name in 2005 after Google acquired Keyhole; Google Maps arrived in the browser around the same time. Microsoft launched “Virtual Earth” shortly after that, though it was initially more like “Virtual North America” as its global coverage was very lacking. Over time, Bing Maps launched a bunch of innovative services, like Bird’s Eye, which used licensed third-party images from spotter planes to stitch together a “45 degree” view rather than the typical straight-overhead aerial view.
The source data for Bird’s Eye is a little out of date in some areas – though it is still being updated in, er, North America (eg. see here and here), and maybe in other areas over time too. Point Bird’s Eye at Microsoft’s UK campus, and it shows Building 5 under construction, so the images are at least 8 years old, though since they show no date other than “© 2020”, there’s no obvious way to tell.
Google’s Street View shows the dates of images if there are multiple – click the down arrow next to “Street View” in the top left to view the history.
As well as rowing back some of the nagging to get Edge browser users to move to Chrome, Google released Google Earth in the browser – it’s maybe not quite as smooth as the desktop app, but it’s quick to use – see Microsoft UK’s TVP campus, here.
The Washington Post reports that Google changes the view of maps depending on the country the user is in, removing disputed borders and the like – so it’s a complicated world. According to that same article, Bing Maps is a very minor player in map usage, while Apple Maps (after an inauspicious start) has grown to be the second-most-used mapping platform, thanks to mobile usage, either in the Maps app directly or via third-party apps which use location awareness from the mobile device.
Bing Maps is used in many online services and other apps, however – like Microsoft’s forthcoming reboot of Flight Simulator, which supposedly features every airport in the world and uses data from Bing Maps, real-time weather reports and rendering in Azure to provide a realistic flying view. There are some amazing videos on the Flight Simulator channel.
This week has seen the Microsoft developer conference, called //build/ in its current guise, take place in “Cloud City”, Seattle (not so-called because it rains all the time – in fact, it rains less than in Miami. Yeah, right). Every major tech company has a developer conference, usually a sold-out nerdfest where the (mostly) faithful gather to hear what’s coming down the line, so they know what to go and build themselves.
Apple has its WWDC in California every year (for a long time, in San Francisco), and at its peak was a quasi-religious experience for the faithful. Other similar keynotes sometimes caused deep soul searching and gnashing of teeth.
The Microsoft one used to be the PDC, until the upcoming launch of Windows 8 meant it was time to try to win the hearts & minds of app developers, so //build/ became rooted in California in the hope that the groovy kids would build their apps on Windows and Windows Phone. Now that ship has largely sailed, it’s gone back up to the Pacific Northwest, with the focus more on other areas.
Moving on from the device-and-app-centric view that prevailed a few years back (whilst announcing a new way of bridging the user experience between multiple platforms of devices), Build has embraced the cloud & intelligent edge vision which cleverly repositions a lot of enabling technologies behind services like Cortana (speech recognition, cognitive/natural language understanding etc) and vision-based products such as Kinect, HoloLens and the mixed reality investments in Windows. AI took centre stage; for a summary of the main event, see here.
The cloud platform in Azure can take data from devices on the edge and process it on their behalf, or using smarter devices, do some of the processing locally, perhaps using machine learning models that have been trained in the cloud but executed at the edge.
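That split – train in the cloud, execute at the edge – can be sketched in a few lines. Everything here is hypothetical for illustration (the weights, the `edge_infer` function, the threshold); a real deployment would use a proper framework such as ONNX on an Azure IoT Edge device, but the shape of the idea is the same:

```python
import math

# Hypothetical model parameters, as if downloaded from a cloud training job.
CLOUD_TRAINED_WEIGHTS = [0.8, -0.4]
CLOUD_TRAINED_BIAS = 0.1

def edge_infer(sensor_reading):
    """Run the trained model locally on the device - no round trip to the cloud."""
    z = sum(w * x for w, x in zip(CLOUD_TRAINED_WEIGHTS, sensor_reading))
    z += CLOUD_TRAINED_BIAS
    return 1.0 / (1.0 + math.exp(-z))  # a score between 0 and 1

# Only readings the local model flags need be sent upstream for processing.
reading = [2.0, 1.5]
score = edge_infer(reading)
if score > 0.5:
    print("send telemetry to cloud, score =", round(score, 3))
```

The point of the pattern is that the expensive part (training) happens once in the cloud, while the cheap part (inference) runs continuously on the device, cutting latency and bandwidth.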
With Azure Sphere, there’s a way for developers to build secure and highly functional ways to process data on-board and communicate with devices, so they can concentrate more on what their apps do, and on the data, less on managing the “things” which generate it.
Back in the non-cloud city, Google has adopted a similar developer rah-rah approach, with its Google I/O conference also taking place in and around San Francisco and, like WWDC and Build, formerly at Moscone. It happened this past week, too.
Like everyone else, Google reserves some major announcements and knock-’em-dead demos for the attendees to get buzzed on, generating plenty of external coverage and crafting an image of how innovative and forward-thinking the company is.
Google Duplex, shown this week to gasps from the crowd, looks like a great way of avoiding dealing with ordinary people any more, a point picked up by one writer who called it “selfish”.
Does a reliance on barking orders at robot assistants, and the increasing sophistication of AI in bots and so on, mean the beginning of the end for politeness and for the service industry? A topic for further consideration, surely.