Research from a couple of years back showed that the most-searched-for term on Bing.com was “google”. While it seems crazy that people would type the name of a search engine into the search box of another, it’s possible they were entering “google” into a box on their homepage or even in the browser address bar, and that term was sent to bing.com as a query, rather than sending the browser to google.com.
If you’re using Edge with Bing as the default search experience – other search engines are available – you may see a prominent search box on your new tab page, but it’s worth remembering that the address bar at the top of the browser is also a search box. You can jump to the address bar in Edge or Chrome by pressing ALT+D, which also selects the current site’s URL (if there is one) so you can edit it or replace it by typing something else.
If you start typing the name of a site into the address bar, you’ll be offered autocomplete suggestions from your favourites and your previous browsing history, so it may be very straightforward to jump not just to the website but to a specific, previously visited page within it.
Entering a site name and pressing CTRL+ENTER adds the https://www. and .com parts so you don’t need to; to go to the BBC website, you could press ALT+D, type bbc, then press CTRL+ENTER and you’d go there directly.
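In effect, the CTRL+ENTER shorthand just wraps whatever you typed with the standard prefix and suffix. A simplified illustration (not the browser’s actual logic, which has more special cases):

```python
def ctrl_enter(name: str) -> str:
    """Mimic the CTRL+ENTER shorthand: wrap the typed name
    with "https://www." and ".com"."""
    return f"https://www.{name}.com"

print(ctrl_enter("bbc"))  # https://www.bbc.com
```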
Although the address bar will ultimately use your default search engine to look up a word or phrase that doesn’t appear to be a website address, you can force a search by typing ? at the start of the address bar, then entering your search term after the question mark.
Some sites will allow the browser to search within them by adding the site name and then pressing TAB. Whatever text you enter after the TAB will be sent to the specific search page of that site. Not all sites support this method, but many common ones do, like Twitter, Amazon, YouTube and more.
Go to the search engine settings in Edge (or jump to the address bar and enter edge://settings/searchEngines) to see which sites are set up already. You can add your own “search engine”, which means you can direct Edge how to search within that site.
Click Add to include one of your own, using the appropriate site URL with the bit where the search term is specified replaced by %s – eg searching the OneDrive photos section for “dogs” gives a URL of https://photos.onedrive.com/search?q=dogs, so the template to save would be https://photos.onedrive.com/search?q=%s.
Give the search engine a shortcut name you want to use, then paste the modified URL and hit Save. Now, in this example, typing photos | TAB | cats | ENTER would search OneDrive for cat pictures.
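Under the hood, the %s template amounts to simple string substitution: the browser URL-encodes whatever you type after TAB and swaps it into the saved URL in place of %s. A rough sketch of the idea (an illustration, not Edge’s actual implementation):

```python
from urllib.parse import quote_plus

def expand_search_template(template: str, query: str) -> str:
    """Substitute the URL-encoded query for the %s placeholder,
    as a browser does for a custom search engine."""
    return template.replace("%s", quote_plus(query))

# The OneDrive photos example: the saved template has %s where the term goes
template = "https://photos.onedrive.com/search?q=%s"
print(expand_search_template(template, "cats"))
# https://photos.onedrive.com/search?q=cats
print(expand_search_template(template, "cat pictures"))  # spaces become '+'
```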
If you are a Microsoft 365 user then you might be able – if it’s been enabled for your tenant – to search internal work documents and SharePoint sites, just by typing work | TAB | etc. It’s on by default, but admins could also give you custom keywords / shortcut words too.
Finally, on the topic of searching in the browser, it’s possible to search across all the tabs you have open; start typing something in the address bar and you’ll see the option of filtering that search to apply to work, history, favourites or tabs.
To quickly jump to that tab, use the up and down keys to select the one you want, and press Enter.
Continual advances in the quality of smartphone cameras mean that most people don’t use a physical camera any more; unless you are really demanding when it comes to control over digital imagery, phone cameras are good enough for most people, most of the time.
Compact cameras have evolved too, providing phone-beating snaps through better sensors and lenses than could possibly fit in the body of a handheld communicator. More light hitting a larger sensor through a bigger, higher quality lens gives you a better starting position to get a decent picture, though smartphones have powerful software and – increasingly – cloud services available to help improve the photo after it’s been captured. Higher-end cameras are changing, too – even Hasselblad (famed for moon shots but also for the most famous photo of the world) is ditching the DSLR model and going mirrorless. The horror!
Google Lens is available on iPhone and iPad too, and depending on the Camera app you use on Android, it might also be launched from there (and most Android devices will launch the Camera app if you double-tap on the power button, so it’s a quick way of getting to Camera, even if the device is locked).
Microsoft Lens is one of the best “Lens” or scanning apps in either mobile store (Fruity | Googly). Formerly “Office Lens” – and at one point also available as a Windows app, now discontinued – it has since been rebranded somewhat by its listing in the mobile app stores as Microsoft Lens: PDF Scanner, though it can do lots more.
The premise of Microsoft Lens is that you can point the camera at something and scan it, taking a high-resolution photo of the thing and then using the software to manipulate, crop and adjust the image. The most obvious use case is scanning a document; start the Lens app, lay the doc out as clearly as you can and then step through grabbing each page in turn.
The red > icon in the lower right shows how many pages have been captured so far. In earlier versions of the Lens app, you’d try to frame the page at the point of capture but now you just grab the images one-by-one (using the big white button) and do the tidying up later.
Press that red button and you’ll go to the UI where Lens tries to identify the corners of each page, and lets you tweak them by dragging the points. You could retake that individual image or delete it from the set of captures.
Press the confirm button on the lower right and you’ll jump to a review of the captured images, giving the option of rotating or adjusting each one, cropping, applying filters to brighten and sharpen them and so on. Once you’re happy that you have the best-looking images, tap on Done to save your work.
You could send all the pictures into a Word or PowerPoint doc, drop them all into OneNote or OneDrive as individual files, or combine all the “pages” into a single PDF and save to your device or to OneDrive.
There are other tools on the primary screen of the Lens app, too, if you swipe left to right. The Whiteboard feature lets you grab the contents off the wall and applies a filter to try to flatten the image and make the colours more vibrant.
There’s a Business Card scanner which uses OCR to recognize the text, and drops the image of the card along with a standard .VCF contact attachment into OneNote, ready to be added to Outlook or another contact management tool.
The Actions option on the home screen gives access to a set of tools for capturing text and copying it to other applications or reading it out. There’s also a QR code and barcode scanner too.
Start the Lens app, and instead of using the camera to grab the contents and then faff around trimming them, tap the small icon in the bottom left to pick images from your camera roll. This way, you could just snap the slides quickly using the normal camera app and do the assembling and tweaking inside the Lens app, later.
This photo was taken on a 4-year-old Android phone, 3 rows back from the stage at an event using the Camera app with no tweaks or adjustments. It was then opened in Lens, which automatically detected the borders of the screen and extracted just that part of the image into a single, flat picture.
Not Aye-Aye, Ally-Ally, nor Why-Eye, but Ay-Eye, as in A.I. And not the cheesy Spielberg flick. The tech news has been all about artificial intelligence recently, whether it’s ChatGPT writing essays and giving witty responses, or Microsoft committing another chunk of change to its developer, OpenAI.
Original backers of OpenAI include Tony Stark (who has since resigned from the board in order to discombobulate the world in other ways) and AWS, though Amazon has warned employees not to accidentally leak company secrets to ChatGPT and its CTO has been less than enthused.
ChatGPT is just one application – a conversational chatbot – using the underlying language technology that is GPT-3, developed by the OpenAI organization and first released over two years ago. It parses language and, using previously analyzed data sets, gives plausible-sounding responses.
Further evolutions could be tuned for particular tasks, like generating code – as already available in PowerApps (using GPT-3 to help build formulae) or GitHub Copilot (which uses other OpenAI technology that extends GPT-3). Maybe other variants could be used for interviews or auto-generating clickbait news articles and blog posts.
You’ll need to join the waitlist initially but this could ultimately be a transformational search technology. Google responded quickly by announcing Bard, though Googling “Google Bard” will tell you how one simple mistake hit the share price. No technology leader lasts forever, unless things coalesce to there being only one.
Other AI models are available, such as OpenAI alternative, Cohere, and there are plenty of sites out there touting AI based services (even if they’re repainting an existing thing to have .ai at the end of it). For some mind-blowing inspiration including AI-generated, royalty-free music or stock photos, see this list.
The last couple of decades have seen a revolution in user apps which offer location awareness and guidance. Automotive sat-navs date back to Honda’s Electro Gyro-cator (now that’s a name) in 1981. CD- and HDD-based satnavs in cars became available over the years since, but typically cost many thousands of dollars/pounds/etc as an option.
Google Earth began life in 2001 as a desktop app (Keyhole’s EarthViewer, later acquired by Google), and Google Maps followed in the browser a few years later. Microsoft launched “Virtual Earth” shortly after that, though it was initially more like “Virtual North America” as its global coverage was very lacking. Over time, Bing Maps launched a bunch of innovative services, like Birds Eye, which used licensed 3rd-party images from spotter planes to stitch together a “45 degree” view rather than the typical straight-overhead aerial view.
The source data for Birds Eye is a little out of date in some areas – though it is still being updated in, er, North America (eg. see here and here), and maybe in other areas over time too. Point Birds Eye at Microsoft’s UK campus, and it shows Building 5 under construction, so the images are at least 8 years old, though since they show no dates other than “© 2020”, there’s no obvious way to tell.
Google’s Street View shows the dates of images if there are multiple – click the down arrow next to “Street View” in the top left to view the history.
As well as rowing back some of the nagging to get Edge browser users to move to Chrome, Google released Google Earth in the browser – it’s maybe not quite as smooth as the desktop app, but it’s quick to use; see Microsoft UK’s TVP campus, here.
The Washington Post reports that Google changes the view of maps depending on the country the user is in, removing disputed borders and the like – so it’s a complicated world. According to that same article, Bing Maps is a very minor player in map usage, while Apple Maps (after an inauspicious start) has grown to be the second-most-used mapping platform, thanks to mobile usage, either in the Maps app directly or via other 3rd-party apps which use location awareness from the mobile device.
Bing Maps is used in many online services and other apps, however – like Microsoft’s forthcoming reboot of Flight Simulator, which supposedly features every airport in the world and uses data from Bing Maps, real-time weather reports and rendering in Azure, to provide a realistic flying view. There are some amazing videos on the Flight Simulator channel.
This week has seen the Microsoft developer conference, called //build/ in its current guise, take place in “Cloud City”, Seattle (not so-called because it rains all the time – in fact, it rains less than in Miami. Yeah, right). Every major tech company has a developer conference, usually a sold-out nerdfest where the (mostly) faithful gather to hear what’s coming down the line, so they know what to go and build themselves.
Apple has its WWDC in California every year (for a long time, in San Francisco), and at its peak was a quasi-religious experience for the faithful. Other similar keynotes sometimes caused deep soul searching and gnashing of teeth.
The Microsoft one used to be the PDC, until the upcoming launch of Windows 8 meant it was time to try to win the hearts & minds of app developers, so //build/ became rooted in California in the hope that the groovy kids would build their apps on Windows and Windows Phone. Now that ship has largely sailed, it’s gone back up to the Pacific North West, with the focus more on other areas.
Moving on from the device-and-app-centric view that prevailed a few years back (whilst announcing a new way of bridging the user experience between multiple platforms of devices), Build has embraced the cloud & intelligent edge vision which cleverly repositions a lot of enabling technologies behind services like Cortana (speech recognition, cognitive/natural language understanding etc) and vision-based products such as Kinect, HoloLens and the mixed reality investments in Windows. AI took centre stage; for a summary of the main event, see here.
The cloud platform in Azure can take data from devices on the edge and process it on their behalf, or, with smarter devices, do some of the processing locally, perhaps using machine learning models that have been trained in the cloud but are executed at the edge.
With Azure Sphere, there’s a way for developers to build secure and highly functional ways to process data on-board and communicate with devices, so they can concentrate more on what their apps do and on the data, and less on managing the “things” which generate it.
Back in the non-cloud city, Google has adopted a similar developer ra-ra method, with its Google I/O conference also taking place in and around San Francisco, also (like WWDC and Build) formerly at Moscone. It happened this past week, too.
Like everyone else, Google reserves some major announcements and some knock-’em-dead demos for the attendees to get buzzed on, generating plenty of external coverage and crafting an image of how innovative and forward-thinking the company is.
Google Duplex, shown this week to gasps from the crowd, looks like a great way of avoiding dealing with ordinary people any more, a point picked up by one writer who called it “selfish”.
Does a reliance on barking orders at robot assistants, and the increasing sophistication of AI in bots and so on, mean the beginning of the end for politeness and for the service industry? A topic for further consideration, surely.