Outlook made some helpful suggestions in response – just one example of new AI functionality showing up by dint of subscription services.
Another case in point of software getting better on a regular basis is a slew of new features that have appeared for some Teams users – namely, the ability to tweak your own camera image. If you don’t see them yet, try checking for updates from the … menu to the left of your profile image in the main Teams window.
Mirroring your video makes it easier to interact with your own environment while you’re looking at the screen, though it doesn’t affect what others in the meeting see.
Lighting Correction is a one-setting tweak for fixing the contrast and brightness, which can be handy when it’s dark outside and your room lighting isn’t ideal.
Most entertaining, though, is the facial retouching feature – YMMV depending on how much your fizzog needs retouching in the first place. You can apply anything from a dab of filler all the way up to full-blown Insta-influencer soft focus, by enabling the feature then moving the slider. Look under Device Settings from the ellipsis (…) menu when you’re in a call.
Check out the Teams blog, and look forward to lots more new features arriving later in the year.
Another tweak to managing your own video comes from user feedback: many people don’t want to see their own video window, finding it distracting when looking at a gallery of other attendees in a meeting.
You’ll be able to hide your own video by clicking the … in the corner of your preview; you can then selectively show or hide it, or – if you’re especially vain – pin your own tile to the meeting view so you show up the same size as everyone else.
Perfect for checking out how the facial buffing has worked out.
If you’ve ever used PowerPoint to present to a group of people, you’ll be familiar with the Slide Show menu to some degree – unless you’re the annoying would-be presenter who merely mirrors their primary screen and flicks through slides without going into full-screen slide show mode.
When presenters do it properly, you’ll often see them kick off by fishing about with the mouse to click the little slide-show icon in the toolbar at the bottom. It’s usually quicker to just hit F5 to start, or Shift+F5 to start from the currently selected slide.
Unfortunately, it’s still pretty common to see the speaker then be surprised because the configuration of their displays isn’t what they expect – especially if they’re sharing their screen in an online meeting, but their laptop is also connected to more than one monitor.
PowerPoint will typically be set up to use Presenter View by default, so the screen being shared shows the speaker notes and upcoming slides, while the full-screen content is displayed on the second monitor – the one that isn’t being shared.
To the right of the Monitor setup for presenter view, you may also see an intriguing option that has been added to PowerPoint – automatic subtitling, and translation too. It’s part of the ongoing Office 365 servicing that brings updates on a regular basis.
Choose the language you’d like to display and the location of the subtitles; when you start presenting, the machine will listen to every word you say and either display what it thinks you’ve said in your own language, or use an online service to translate the subtitles into over 60 languages.
There’s an older add-in which achieves much the same thing, if you’re not using O365 – see here for more info. The Presentation Translator addin also allows the audience to follow along and even interact with the presenter using the Microsoft Translator app on their phone.
Windows also has a closed captioning settings page that applies to other apps which support it, if you’d like to show subtitles on video that already has caption content defined.
Closed Captioning is legislated by several countries, for traditionally-broadcast media as well as online video.
You may also want to add captions to videos that you plan to embed – more, here.
Microsoft people love PowerPoint – even when using it for completely unsuitable purposes (writing reports in PPT instead of Word or OneNote, filling slides with very dense, small text), or simply putting too much stuff on a slide, forcing the presenter to say “this is an eyechart, but…”
There are many resources out there to try to help you make better slides – from how-to videos to sites puffing a mix of obvious things and a few obscure and never-used tricks (eg here or here), and PowerPoint itself is adding technology to try to guide you within the app.
The PowerPoint Designer functionality uses AI technology to suggest better layouts for the content you’ve already put on your slide – drab text, even a few Icons (a library of useful-looking, commonly-used symbols) or graphics from your favourite source of moody pics.
If you don’t see the Design Ideas pane on the right, look for the icon on the Design tab, under, er, Designer.
The PowerPoint Designer team has recently announced that one billion slides have been created or massaged using this technology, and they have previewed some other exciting stuff to come – read more here.
A cool Presenter Coach function will soon let you practise your presentation to the machine – presumably there isn’t some poor soul listening in for real – and you’ll get feedback on pace, use of words and so on. Watch the preview. No need to imagine Presenter Coach is sitting in his or her undies, either.
When it comes to laying out simple objects on a slide, though, you might not need advanced AI to guide you, rather a gentle helping hand. As well as using the Align functionality that will ensure shapes, boxes, charts etc, are lined up with each other, spread evenly and so on, when you’re dragging or resizing items you might see dotted lines indicating how the object is placed in relation to other shapes or to the slide itself…
In the diagram above, the blue box is now in the middle of the slide, and is as far from the orange box as the gap between the top of the orange box and the top of the grey one. There are lots of subtle clues like this when sizing and placing objects, and it’s even possible to set your own guides up if you’re customising a slide master.
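The logic behind those smart guides can be illustrated with a toy sketch – all function names, units and the tolerance value below are made up for illustration, not PowerPoint’s actual implementation: given shape bounding boxes, detect when a shape is centred on the slide and when the gaps between shapes are equal.

```python
# Toy sketch of the smart-guide checks PowerPoint performs while you drag a
# shape: is it centred on the slide, and are the gaps between shapes equal?
# Names, units and tolerance are illustrative, not PowerPoint's real logic.

SLIDE_WIDTH = 960  # toy units; a real slide is measured in points/EMUs


def is_horizontally_centred(left, width, slide_width=SLIDE_WIDTH, tol=1):
    """True if the shape's centre is within `tol` units of the slide's centre."""
    return abs((left + width / 2) - slide_width / 2) <= tol


def gaps_are_equal(boxes, tol=1):
    """Given shapes as (top, height) tuples sorted top-to-bottom, check that the
    vertical gaps between consecutive shapes are equal (the 'even spacing' guide)."""
    gaps = [boxes[i + 1][0] - (boxes[i][0] + boxes[i][1])
            for i in range(len(boxes) - 1)]
    return all(abs(g - gaps[0]) <= tol for g in gaps)


# A 200-wide box at left=380 sits dead centre on a 960-wide slide.
print(is_horizontally_centred(380, 200))                    # True
# Three boxes with identical 40-unit gaps would trigger the spacing guide.
print(gaps_are_equal([(0, 100), (140, 100), (280, 100)]))   # True
```

The dotted guide lines appear when checks like these fire during a drag; the same comparisons underpin the Align and Distribute commands.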
Artificial Intelligence has been dreamt of for decades – the idea that machines will be as smart as, or maybe smarter than, humans. AI in popular consciousness is not just a rubbish film: if you’re a brainless tabloid journalist, it means Siri and Alexa (assuming you have connectivity, obvs… and hoping there’s no Human Stupidity that forgot to renew a certificate or anything), and the robots that are coming to kill us all.
Of course, many of us know AI as a term used to refer to a host of related technologies, such as speech and natural language recognition, visual identification and machine learning. For a great example on practical and potentially revolutionary uses of AI, see Dr Chris Bishop’s talk at Future Decoded 2018 – watch day 1 highlights starting from 1:39, or jump to 1:50 for the example of the company using machine learning to make some world-changing medical advances.
Back in the mundane world for most of us, AI technologies are getting more visible and more useful day to day – like in OneDrive, where many improvements, including various AI investments, are starting to show up.
One simple example is image searching – if you upload photos to consumer OneDrive (directly from your phone perhaps), the OneDrive service will now scan images for text that can be recognized… so if you took a photo of a receipt for expenses, OneDrive might be able to find it if you can remember what kind of food it was.
There’s also a neat capability where OneDrive will try to tag your photos automatically – just go into www.onedrive.com and look under Photos, where you’ll see a grid of thumbnails of your pictures arranged by date, but also the ability to summarise by album, by place (from the geo-location of your camera phone) or by Tag. You can edit and add your own, but it’s an interesting start to see what the visual search technology has decided your photos are about… not always 100% accurately, admittedly…
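The idea behind this kind of photo search can be sketched in a few lines – once a service has extracted text (OCR) and auto-tags from each image server-side, finding “that receipt” is just an index lookup. The filenames, extracted text and tags below are entirely made up for illustration; the real recognition happens in the OneDrive service.

```python
# Toy sketch of searchable photos: a service populates an index of recognised
# text and auto-generated tags per image, and search is a simple scan over it.
# All data here is invented for illustration.

photo_index = {
    "IMG_0231.jpg": {"text": "Luigi's Pizzeria total £18.40", "tags": ["receipt", "document"]},
    "IMG_0232.jpg": {"text": "", "tags": ["beach", "sunset"]},
    "IMG_0233.jpg": {"text": "Boarding pass LHR-SEA", "tags": ["document", "travel"]},
}


def search_photos(query, index=photo_index):
    """Return filenames whose recognised text or tags contain the query (case-insensitive)."""
    q = query.lower()
    return [name for name, meta in index.items()
            if q in meta["text"].lower() or any(q in tag for tag in meta["tags"])]


print(search_photos("pizzeria"))   # matched via the OCR'd receipt text
print(search_photos("document"))   # matched via the auto-generated tags
```

A real service would use a proper inverted index rather than a linear scan, but the principle – search over machine-extracted metadata rather than pixels – is the same.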
More AI goodness is to come to Office 365 and OneDrive users in the near future – automatically transcribing content from videos stored online (using the same technology from the Azure Video Indexer and Microsoft Stream), to real-time PowerPoint captions. Watch this space… and mind the robots.
This week has seen the Microsoft developer conference, called //build/ in its current guise, take place in “Cloud City”, Seattle (not so-called because it rains all the time – in fact, it rains less than in Miami. Yeah, right). Every major tech company has a developer conference, usually a sold-out nerdfest where the (mostly) faithful gather to hear what’s coming down the line, so they know what to go and build themselves.
Apple has its WWDC in California every year (for a long time, in San Francisco), and at its peak was a quasi-religious experience for the faithful. Other similar keynotes sometimes caused deep soul searching and gnashing of teeth.
The Microsoft one used to be the PDC, until the upcoming launch of Windows 8 meant it was time to try to win the hearts & minds of app developers, so //build/ became rooted in California in the hope that the groovy kids would build their apps on Windows and Windows Phone. Now that ship has largely sailed, it’s gone back up to the Pacific North West, with the focus more on other areas.
Moving on from the device-and-app-centric view that prevailed a few years back (whilst announcing a new way of bridging the user experience between multiple platforms of devices), Build has embraced the cloud & intelligent edge vision which cleverly repositions a lot of enabling technologies behind services like Cortana (speech recognition, cognitive/natural language understanding etc) and vision-based products such as Kinect, HoloLens and the mixed reality investments in Windows. AI took centre stage; for a summary of the main event, see here.
The cloud platform in Azure can take data from devices on the edge and process it on their behalf, or using smarter devices, do some of the processing locally, perhaps using machine learning models that have been trained in the cloud but executed at the edge.
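That cloud-train / edge-infer pattern can be sketched minimally: fit a model where the data and compute live, ship only its parameters to the device, and score new readings locally without a round trip. The toy linear model and JSON blob below are illustrative only – they’re not Azure IoT Edge or Azure ML APIs.

```python
# Minimal sketch of training in the "cloud" and inferring at the "edge":
# only the fitted parameters travel to the device. Toy example, not Azure APIs.
import json

# --- "Cloud" side: fit a trivial linear model y = a*x + b by least squares ---
def train(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return {"a": a, "b": b}

# Serialise the parameters -- this small blob is all that gets shipped down.
model_blob = json.dumps(train([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.0]))

# --- "Edge" side: load parameters and score a new sensor reading locally ---
params = json.loads(model_blob)

def predict(x):
    return params["a"] * x + params["b"]

print(predict(5))  # computed on-device, no call back to the cloud
```

Real deployments ship something like an ONNX model to an edge runtime rather than hand-rolled JSON, but the division of labour – heavy training centrally, cheap inference locally – is the point.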
With Azure Sphere, there’s a way for developers to build secure and highly functional ways to process data on-board and communicate with devices, so they can concentrate more on what their apps do, and on the data, less on managing the “things” which generate it.
Back in the non-cloud city, Google has adopted a similar developer ra-ra method, with its Google I/O conference taking place in and around San Francisco – like WWDC and Build, formerly at Moscone. It happened this past week, too.
Like everyone else’s, its major announcements and knock-’em-dead demos are reserved for attendees to get buzzed on, generating plenty of external coverage and crafting an image of how innovative and forward-thinking the company is.
Google Duplex, shown this week to gasps from the crowd, looks like a great way of avoiding dealing with ordinary people any more, a point picked up by one writer who called it “selfish”.
Does a reliance on barking orders at robot assistants, and the increasing sophistication of AI in bots and so on, mean the beginning of the end for politeness – and for the service industry? A topic for further consideration, surely.