
How to view tweets chronologically on Twitter

As with Facebook’s and Instagram’s feeds, Twitter lets you switch between its algorithm-driven timeline and a timeline sorted from the most recent to the oldest tweets. So if you’re not particularly interested in Twitter’s recommended tweets in its Home timeline, you still have the option to view a chronologically sorted timeline instead.

In this guide, we’ll show you how to view tweets in your timeline chronologically either on the desktop website or on the mobile app. You can’t go wrong with either method: They’re both incredibly simple.

How to view the most recent tweets first: On the desktop website

If you normally scroll through Twitter via a desktop web browser, this is the method for you. Here’s how to view the most recent tweets first on Twitter’s desktop website:

Step 1: Go to Twitter.com and log into your account if you haven’t already.

Step 2: Once logged in, you’ll see your Home timeline. To switch your timeline view to show the most recent tweets first, select the Three star sparkle icon. It’s located in the top right corner of your main timeline.



Step 3: From the menu that appears, select the See latest Tweets instead option. This option is also marked with a Double arrows icon.

That’s it. You’re now viewing your Twitter timeline chronologically with the most recent tweets posted at the top.

Desktop website Twitter timeline's latest tweets menu option.
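If you’d rather script it than click through the UI, Twitter’s v2 API exposes a reverse-chronological home timeline endpoint. Here’s a minimal sketch, assuming you have a user-context OAuth 2.0 token with read access; the user ID and token below are placeholders:

```python
# Minimal sketch: fetch your latest-first home timeline via the Twitter v2 API.
# USER_ID and ACCESS_TOKEN are placeholders you must supply yourself.
import requests

USER_ID = "1234567890"       # your numeric Twitter user ID (placeholder)
ACCESS_TOKEN = "YOUR_TOKEN"  # user-context OAuth 2.0 token (placeholder)

url = f"https://api.twitter.com/2/users/{USER_ID}/timelines/reverse_chronological"
resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"max_results": 10, "tweet.fields": "created_at"},
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["created_at"], tweet["text"])
```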


How to view the most recent tweets first: On the mobile app

You can also sort your timeline chronologically via the Twitter mobile app. Here’s how to do it:

Step 1: Open the Twitter app on your mobile device.

Step 2: On the main timeline screen, select the Three star sparkle icon at the top of your screen.

Twitter mobile app's Three star sparkle icon.


Step 3: From the menu that appears, choose the Switch to latest Tweets option. This menu option is also marked with a Double arrows icon.

And that’s pretty much it. Your timeline should now be sorted to show you the most recent tweets first.

The Twitter mobile app's latest tweets menu option.




How to view posts chronologically on Instagram

If you’re tired of Instagram’s feed recommendations and would rather just see your friends’ posts sorted chronologically, you can actually do that now. But there are a few caveats to doing so:

  • You can only sort and filter your IG feed on the mobile app. The desktop web version doesn’t offer that functionality at this time.
  • Instagram’s way of sorting your feed to show the most recent posts isn’t a permanent solution. The sorted views are separate feeds that you can navigate to, but neither of them can be set as your default. So you’ll always see Instagram’s main feed (algorithm recommendations and all) first.

That said, we can still show you how to navigate to these more customizable feeds so you can scroll through the IG posts you actually want to see, sorted in a way that makes more sense to you. Here’s how to view posts chronologically on Instagram.

How to view IG posts chronologically using the Following feed

The Instagram mobile app will let you view posts in your feed chronologically via one of two views: Following and Favorites. In this section, we’ll show you how to use the Following feed to view the most recent IG posts first.

Note: The Following feed doesn’t permanently sort your IG feed chronologically. It is a separate screen within the Instagram mobile app that temporarily shows posts in chronological order. Your main feed, with Instagram’s usual algorithmic ordering, remains the default view; you can navigate to the Following feed, but you can’t make it the default.

Step 1: Open the Instagram app on your mobile device.

Step 2: Select Instagram in the top-left corner of your screen.



Step 3: From the drop-down menu that appears, choose Following.

You’ll then be taken to the Following feed screen. On this screen, you’ll only see posts from IG accounts that you follow, and these posts will be sorted from the most recent ones at the top with older posts toward the bottom.

To go back to the main feed, select the Back arrow in the top-left corner.

The Instagram Following feed screen's Back Arrow icon.


How to view IG posts chronologically using the Favorites feed

You can also view the most recent posts first on IG via the Favorites feed. Setting up this feed takes a few extra steps, but it lets you pick the specific accounts you want to see and sorts their posts chronologically. So instead of seeing chronologically sorted posts from every account you follow, you can use the Favorites feed to view posts from up to 50 of your favorite accounts.

Remember: The Favorites feed isn’t a permanent fix for Instagram’s main feed recommendations. Posts from your favorite accounts may rank higher in your main feed, so you may see them first, but the main feed is still your default, is still based on IG’s algorithm, and is unlikely to be chronological. You’ll need to visit the Favorites feed to view posts from your favorite accounts chronologically.

In this section, we’ll show you how to navigate to and set up your Favorites feed on Instagram.

Step 1: Open the Instagram app on your mobile device.

Step 2: Select Instagram in the top-left corner of your screen. Then select Favorites from the drop-down menu that appears.

The Instagram Favorites feed menu option.


Step 3: On the Favorites feed screen, choose Add favorites.

The Instagram Favorites feed's Add Favorites button.


Step 4: On the next screen, Instagram will present you with a bunch of suggested favorites that you need to confirm or remove.

If you want to remove one of them, select Remove. If you want to confirm them (add them to your Favorites feed), select the blue Confirm favorites button at the bottom of the screen.

The next time you open the Instagram app on your phone, select Instagram > Favorites. You should see the Favorites feed, which has all of the posts from your confirmed favorite accounts chronologically sorted.

The Instagram Favorites feed's Confirm Favorites screen.




Google Maps’ Live View feature now offers more useful information about restaurants and businesses

Google announced a bunch of new features for Google Maps at its 2021 I/O developer conference today, including upgrades to its handy Live View tool, which helps you navigate the world through augmented reality.

Live View launched in beta in 2019, projecting walking directions through your camera’s viewfinder, and was rolled out to airports, transit stations, and malls earlier this year. Now, Live View will be accessible directly from Google Maps and will collate a lot of handy information, including how busy shops and restaurants are, recent reviews, and any uploaded photos.

It sounds particularly handy for exploring new destinations remotely. You can browse that street full of interesting restaurants while on holiday, check out which places are heaving, and even look at some of the pictures of dishes.

Live View also now has better labeling for streets on complex intersections, says Google, and it will automatically orient you in relation to known locations (like your home or work).

That’s not all for Maps, though, and as Google’s Liz Reid said onstage, the company is on track to launch “more than 100 AI-driven improvements” for the app this year.

Other upgrades include a wider launch for the detailed street map view, which will be available in 50 new cities by the end of the year, including Berlin, São Paulo, Seattle, and Singapore; new “busyness” indicators for whole areas as opposed to specific shops and businesses; and selective highlighting of nearby businesses based on the time of day that you’re looking at Google Maps. So if you open up the app first thing in the morning, it’ll show you more places to grab a coffee than a candle-lit dinner for two.

A comparison of Google Maps before and after its more detailed street map view.
Image: Google

A more ambitious upgrade for Maps is using AI to identify and forecast “hard-braking events” when you’re in your car. Think of it like traffic warnings that navigation apps issue based on data collated from multiple users. But instead of just traffic jams, Google Maps will try to identify harder-to-define “situations that cause you to slam on the brakes, such as confusing lane changes or freeway exits.” The company says its tech has the ability to “eliminate over 100 million hard-braking events in routes driven with Google Maps each year” by giving users a heads-up when it knows such an event is on the horizon.



Google Deploys Its First All-Electric Street View Car

Google has captured well over 10 million miles of global Street View imagery since its camera-equipped cars first hit the streets 14 years ago.

But despite the emergence of greener vehicle technology, the company has only now gotten around to deploying its first all-electric Street View car.

The vehicle, a Jaguar I-Pace, recently took to the streets of Dublin to take fresh imagery for Google Street View, a popular online service that offers panoramic views of a growing number of places around the world.

But that’s not all that the Jaguar I-Pace is doing, as Google’s first all-electric Street View car has been fitted with a mobile air-sensing platform for gathering data on the city’s air quality over the next 12 months.

Google’s first all-electric Street View car capturing imagery and measuring air quality on the streets of Dublin. Image: Google

Created by environmental intelligence company Aclima, the technology’s sensors can measure and analyze pollutants such as nitrogen dioxide, nitrous oxide, carbon dioxide, carbon monoxide, fine particulate matter, and ozone, which at high levels can harm not only the climate but also human health.

“Dublin City Council is dedicated to fulfilling its commitment to the U.N. BreatheLife campaign, and it is projects like this that leverage innovation and forward thinking to allow us to make informed decisions for the benefit of our city and citizens,” Hazel Chu, the Lord Mayor of Dublin, said in comments reported by the Irish Times. “Environmental air quality is an issue that affects everyone, especially people who live in cities, and I look forward to learning more about how our city lives and breathes.”

Google has actually been doing this kind of work in a number of cities since 2017, although considering the type of car it’s using in Dublin, there will be a little less pollution to detect this time around.

It’s not clear if Google is planning to use more all-electric vehicles in its ongoing Street View project. We’ve reached out to the company to find out and will update this article when we hear back.

In other Street View news, check out this remarkable story about how one man carried a camera for thousands of miles in a monumental effort to bring the southern African nation of Zimbabwe to Google’s mapping tool.



Under the AI hood: A view from RSA Conference



Artificial intelligence and machine learning are often touted in IT as crucial tools for automated detection, response, and remediation. Enrich your defenses with finely honed prior knowledge, proponents insist, and let the machines drive basic security decisions at scale.

This year’s RSA Conference had an entire track dedicated to security-focused AI, while the virtual show “floor” featured no fewer than 45 vendors hawking some form of AI or machine learning capabilities.

While the profile of AI in security has evolved over the past five years from a dismissible buzzword to a legitimate consideration, many question its efficacy and appropriateness — and even its core definition. This year’s conference may not have settled the debate, but it did highlight the fact that AI, ML, and other deep-learning technologies are making their way deeper into the fabric of mainstream security solutions. RSAC also showcased a formal methodology for assessing the veracity and usefulness of AI claims in security products, a capability beleaguered defenders desperately need.

“The mere fact that a company is using AI or machine learning in their product is not a good indicator of the product actually doing something smart,” said Raffael Marty, an expert in the use of AI, data science, and visualization in security. “On the contrary, most companies I have looked at that claim to use AI for some core capabilities are doing it wrong in some way.”

“There are some that stick to the right principles, hire actual data scientists, apply algorithms correctly, and interpret the data correctly,” Marty told VentureBeat. Marty is also an IANS faculty member and author of Applied Security Visualization and The Security Data Lake. “Unfortunately, these companies are still not found very widely.”

In his opening-day keynote, Cisco chair and CEO Chuck Robbins pitched the need for emerging technologies — like AI — to power security approaches capable of quick, scalable threat identification, correlation, and response in blended IT environments. Today these include a growing number of remote users, along with hybrid cloud, fog, and edge computing assets.

“We need to build security practices around what we know is coming in the future,” Robbins said. “That’s foundational to being able to deal with the complexity. It has to be based on real-time insights, and it has to be intelligent, leveraging great technology like AI and machine learning that will allow us to secure and remediate at a scale that we’ve never been able to yet always hoped we could do.”

Use cases: Security AI gets real

RSAC offered examples of practical AI and machine learning information security applications, like those championed by Robbins and other vendor execs.

One eSecurity founder Jess Garcia walked attendees through real-world threat hunting and forensics scenarios powered by machine learning and deep learning. In one case, Garcia and his team normalized 30 days of real data from a Fortune 50 enterprise — some 224,000 events and 24 million files from more than 100 servers — and ran it through a machine learning engine, setting a baseline for normal behavior. The machine learning models built from that data were then injected with malicious event-scheduling log data mimicking the recent SolarWinds attack to see if the machine-taught system could detect the attack with no prior knowledge or known indicators of compromise.

Garcia’s highly technical presentation was notable for its concession that artificial intelligence produced rather disappointing results on the first two passes. But when augmented with human-derived filtering and supporting information about the time of the scheduling events, the malicious activity rose to a detectable level in the model. The lesson, Garcia said, is to understand the emerging technology’s power, as well as its current limitations.
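Garcia didn’t share his exact tooling, so here is only a toy sketch of the baseline-then-detect pattern he described, using scikit-learn’s IsolationForest with invented feature columns standing in for normalized event data:

```python
# Toy baseline-then-detect sketch (not One eSecurity's actual pipeline).
# Features are invented: (hour of day, scheduled-task events per hour).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# "30 days" of normal activity: business-hours events at a steady rate.
baseline = np.column_stack([rng.normal(13, 3, 5000), rng.normal(40, 5, 5000)])
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Inject scheduling events at an odd hour and an odd rate, loosely mimicking
# the SolarWinds-style scenario described in the talk.
injected = np.array([[3.0, 90.0], [13.0, 41.0]])
print(model.predict(injected))  # -1 flags an anomaly, 1 looks normal
```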

“AI is not a magic button and won’t be anytime soon,” Garcia said. “But it is a powerful weapon in DFIR (digital forensics and incident response). It is real and here to stay.”

For Marty, other promising use cases in AI-powered information security include the use of graph analytics to map out data movement and lineage to expose exfiltration and malicious modifications. “This topic is not well-researched yet, and I am not aware of any company or product that works well yet. It’s a hard problem on many layers, from data collection to deduplication and interpretation,” he said.

Sophos lead data scientist Younghoo Lee demonstrated for RSAC attendees the use of the natural-language Generative Pre-trained Transformer (GPT) to generate a filter that detects machine-generated spam, a clever use case that turns AI into a weapon against itself. Models such as GPT can generate coherent, humanlike text from a small training set (in Lee’s case, fewer than 5,000 messages) and with minimal retraining.

The performance of any machine-driven spam filter improves as the volume of training data increases. But manually adding to an ML training dataset can be a slow and expensive proposition. For Sophos, the solution was to use two different methods of controlled natural-language text generation, which steadily improved the GPT model’s output and grew the original dataset to more than five times its size. The tool was essentially teaching itself what spam looked like by creating its own.

Armed with machine-generated messages that replicate both ham (good) and spam (bad) messages, the ML-powered filter proved particularly effective at detecting bogus messages that were, in all probability, created by a machine, Lee said.

“GPT can be trained to detect spam, [but] it can be also retrained to generate novel spam and augment labeled datasets,” Lee said. “GPT’s spam detection performance is improved by the constant battle of text generating and detecting.”
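Sophos didn’t publish its pipeline, but the general pattern it describes (generate synthetic spam with a language model, then retrain an ordinary classifier on the enlarged dataset) can be sketched roughly as follows. The model, seed messages, and classifier are stand-ins, and for brevity this only augments the spam side:

```python
# Rough sketch of generative augmentation for spam filtering; gpt2 is a
# stand-in for the GPT model Sophos used, and the data is invented.
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_ham = ["Meeting moved to 3pm, agenda attached", "Lunch tomorrow?"]
seed_spam = ["WIN a FREE prize now!!!", "Cheap meds, no prescription needed"]

# 1. Generate synthetic spam by prompting the model with real spam.
generator = pipeline("text-generation", model="gpt2")
synthetic_spam = [
    out["generated_text"]
    for msg in seed_spam
    for out in generator(msg, max_new_tokens=30, do_sample=True,
                         num_return_sequences=2)
]

# 2. Retrain a plain classifier on the augmented dataset.
texts = seed_ham + seed_spam + synthetic_spam
labels = [0] * len(seed_ham) + [1] * (len(seed_spam) + len(synthetic_spam))
clf = LogisticRegression().fit(TfidfVectorizer().fit_transform(texts), labels)
```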

A healthy dose of AI skepticism

Such use cases aren’t enough to recruit everyone in security to AI, however.

In one of RSAC’s most popular panels, famed cryptographers Ron Rivest and Adi Shamir (the R and S in RSA) said machine learning is not ready for prime time in information security.

“Machine learning at the moment is totally untrustworthy,” said Shamir, a professor at the Weizmann Institute in Rehovot, Israel. “We don’t have a good understanding of where the samples come from or what they represent. Some progress is being made, but until we solve the robustness issue, I would be very worried about deploying any kind of big machine-learning system that no one understands and no one knows in which way it might fail.”

“Complexity is the enemy of security,” said Rivest, a professor at MIT in Cambridge, Massachusetts. “The more complicated you make something, the more vulnerable it becomes. And machine learning is nothing but complicated. It violates one of the basic tenets of security.”

Even as an AI evangelist, Marty understands such hesitancy. “I see more cybersecurity companies leveraging machine learning and AI in some way, [but] the question is to what degree?” he said. “It’s gotten too easy for any software engineer to play data scientist. The challenge lies in the fact that the engineer has no idea what just happened within the algorithm.”

Developing an AI litmus test

For enterprise defenders, the academic back and forth on AI adds a layer of confusion to already difficult decisions on security investments. In an effort to counter that uncertainty, the nonprofit research and development organization Mitre Corp. is developing an assessment tool to help buyers evaluate AI and machine learning claims in infosec products.

Mitre’s AI Relevance Competence Cost Score (ARCCS) aims to give defenders an organized way to question vendors about their AI claims, in much the same way they would assess other basic security functionality.

“We want to be able to jump into the dialog with cybersecurity vendors and understand the security and also what’s going on with the AI component as well,” said Anne Townsend, department manager and head of NIST cyber partnerships at Mitre. “Is something really AI-enabled, or is it really just hype?”

ARCCS will provide an evaluation methodology for AI in information security, measuring the relevance, competence, and relative cost of an AI-enabled product. The process will determine how necessary an AI component is to the performance of a product; whether the product is using the right kind of AI and doing it in a responsible way; and whether the added cost of the AI capability is justified for the benefits derived.
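Mitre hadn’t published the scoring mechanics at the time of writing, so the following is a purely hypothetical sketch of how a relevance/competence/cost rubric might be recorded and compared; the fields, scales, and example scores are all invented:

```python
# Hypothetical ARCCS-style record; Mitre's real rubric may look nothing
# like this. Scores here use an invented 0-5 scale.
from dataclasses import dataclass

@dataclass
class ArccsAssessment:
    vendor: str
    relevance: int   # is AI actually necessary to the product's function?
    competence: int  # is the right kind of AI applied responsibly?
    cost: int        # is the added cost justified by the benefit?

    def total(self) -> int:
        return self.relevance + self.competence + self.cost

candidates = [
    ArccsAssessment("Vendor A", relevance=4, competence=3, cost=2),
    ArccsAssessment("Vendor B", relevance=1, competence=2, cost=4),
]
for c in sorted(candidates, key=ArccsAssessment.total, reverse=True):
    print(c.vendor, c.total())
```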

“You need to be able to ask vendors the right questions and ask them consistently,” Michael Hadjimichael, principal computer scientist at Mitre, said of the AI framework effort. “Not all AI-enabled claims are the same. By using something like our ARCCS tool, you can start to understand if you got what you paid for and if you’re getting what you need.”

Mitre’s ongoing ARCCS research is still in its early stages, and it’s difficult to say how most products claiming AI enhancements would fare with the assessment. “The tool does not pass or fail products — it evaluates,” Townsend told VentureBeat. “Right now, what we are noticing is there isn’t as much information out there on products as we’d like.”

Officials from vendors such as Hunters, which features advanced machine learning capabilities in its new XDR threat detection and response platform, say reality-check frameworks like ARCCS are sorely needed and stand to benefit both security sellers and buyers.

“In a world where AI and machine learning are liberally used by security vendors to describe their technology, creating an assessment framework for buyers to evaluate the technology and its value is essential,” Hunters CEO and cofounder Uri May told VentureBeat. “Customers should demand that vendors provide clear, easy-to-understand explanations of the results obtained by the algorithm.”

May also urged buyers to understand AI’s limitations and be realistic in assessing appropriate uses of the technology in a security setting. “AI and ML are ready to be used as assistive technologies for automating some security operations tasks and for providing context and information to facilitate decision-making by humans,” May said. “But claims that offer end-to-end automation or massive reduction in human resources are probably exaggerated.”

While a framework like ARCCS represents a significant step for decision-makers, having such an evaluation tool doesn’t mean enterprise adopters should now be expected to understand all the nuances and complexities of a complicated science like AI, Marty stressed.

“The buyer really shouldn’t have to know anything about how the products work. The products should just do what they claim they do and do it well,” Marty said.

Crossing the AI chasm

Every year, RSAC shines a temporary spotlight on emerging trends, like AI in information security. But when the show wraps, security professionals, data scientists, and other advocates are tasked with shepherding the technology to the next level.

Moving forward requires solutions to three key challenges:

Amassing and processing sufficient training data

Every AI use case begins with ingesting, cleaning, normalizing, and processing data to train the models. The more training data available, the smarter the models get and the more effective their actions become. “Any hypothesis we have, we have to test and validate. Without data, that’s hard to do,” Marty said. “We need complex datasets that show user interactions across applications, data, and cloud apps, along with contextual information about the users.”

Of course, data access and the work of harmonizing it can be difficult and expensive. “This kind of data is hard to get, especially with privacy and regulations like GDPR putting more processes around AI research efforts,” Marty said.
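As a trivially small illustration of what that ingest/clean/normalize step looks like in practice, here is a hedged sketch; the event fields are invented, not drawn from any dataset Marty described:

```python
# A minimal, hypothetical example of the ingest/clean/normalize step that
# precedes model training; the event fields here are invented.
import pandas as pd

raw = pd.DataFrame({
    "timestamp": ["2021-05-17 09:02:11", "2021-05-17 09:02:30", None],
    "user": ["alice", "ALICE", "bob"],
    "bytes_out": ["1024", "2048", "fifteen"],  # dirty numeric column
})

clean = raw.dropna(subset=["timestamp"]).copy()
clean["timestamp"] = pd.to_datetime(clean["timestamp"])
clean["user"] = clean["user"].str.lower()  # normalize identifiers
clean["bytes_out"] = pd.to_numeric(clean["bytes_out"], errors="coerce")
clean = clean.dropna(subset=["bytes_out"])

# Features like these would then feed a baseline model, as in the DFIR
# example earlier in this article.
print(clean)
```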

Recruiting skilled experts

Leveraging AI in security demands expertise in two complex domains — data science and cybersecurity. Finding, recruiting, and retaining talent in either specialty is difficult enough. The combination borders on unicorn territory. The AI skills shortage exists at all experience levels, from starters to seasoned practitioners. Organizations that hope to be ready to take advantage of the technology over the long haul should focus on diversifying sources of AI talent and building a deep bench of trainable, tech- and security-savvy team members who understand operating systems and applications and can work with data scientists, rather than hunting for just one or two world-class AI superstars.

Making adequate research investments

Ultimately, the fate of AI security hinges on a consistent financial commitment to advancing the science. All major security firms do malware research, “but how many have actual data science teams researching novel approaches?” Marty asked. “Companies typically don’t invest in research that’s not directly related to their products. And if they do, they want to see fairly quick turnarounds.” Smaller companies can sometimes pick up the slack, but their ad hoc approaches often fall short in scalability and broad applicability. “This goes back to the data problem,” Marty said. “You need data from a variety of different environments.”

Making progress on these three important issues rests with both the vendor community, where decisions that determine the roadmap of AI in security are being made, and enterprise user organizations. Even the best AI engines nested in prebuilt solutions won’t be very effective in the hands of security teams that lack the capacity, capability, and resources to use them.



How to Use Split View on a Mac

When you want to see multiple windows on one screen without everything getting jumbled, turn to a split screen. A split screen gives you two or more sections, each with its own set of information, letting you work more efficiently and see more information without a second screen.

Now for the good news: In newer versions of MacOS, there’s a very easy split-screen mode called Split View that anyone with an updated Mac can use. In this guide, we’ll teach you how to use Split View on a Mac to make the most of your system.

While connecting multiple external monitors is always a possibility for larger projects, here’s how to divide your screen on a smaller level whenever you need it.

Get started with Split View

Step 1: Begin by opening two or more windows that you want to be paired in a split-screen layout: Browser windows, apps, documents — whatever you want. Pick your first window, and look in its upper-left corner to find three colored dots: Red, yellow, and green. These control the window.

MacOS Catalina Split View Options

Step 2: If you hover the cursor over the Green Dot, it presents two small “expand” arrows. Hold down on this Green Dot, and a list of options appears: Enter Full Screen, Tile Window to Left of Screen, and Tile Window to Right of Screen. Select either the second or third option, and the window will fill that portion of your display.

MacOS Catalina Split View Windows

Step 3: One half of your Split View is done. You’ll see the first app on one half of the screen, with thumbnails of any other open windows on the other side. Select the other window that you want to use in Split View mode, and it will expand to fill the void, completing the Split View experience. You can tap either window to switch your primary focus as needed.

Adjusting Split View

Split View Divider

Split View doesn’t necessarily need to divide your screen equally. You can click and hold on the Black Divider and slide it left or right to adjust each half of the screen. This is particularly useful if you’re trying to view a large web page with an odd design or need extra space for a big spreadsheet. Just note that some apps — like Apple’s Photos, for example — have minimum widths, so you may not be able to adjust the bar much or even at all.

If you realize you prefer the windows on different sides, simply click and hold an app’s Title Bar and drag it over to the opposite side. The windows will automatically switch places.

Not sure where your menus have gone? Split View automatically hides the menu bar (and Dock). Just move your pointer to the top of the screen, and the menu bar reappears for as long as your pointer stays there, giving you access to each app’s menus while you’re using Split View.

Finally, if the windows are too small, you can adjust your resolution.

When you’re ready to leave Split View mode, click on the Green Dot on either window or press Esc. This will return both windows to their original state and allow you to resume what you were doing before entering Split View.

Split View options

Split View Zoom Option

If you hold Option (or Alt) and click the Green Dot in a window’s top-left corner, you get three new options: Zoom, Move Window to Left Side of Screen, and Move Window to Right Side of Screen.

Whereas Enter Full Screen hides the Dock and menu bar, Zoom keeps these in place. The difference between tiling a window and moving it is similar — tiling hides the Dock and menu bar, while moving does not. Moving also doesn’t enter Split View — there’s no moveable black bar when you just move a window to either side of the screen. You don’t need to pair a second app, either.

Window snapping

MacOS Catalina window snapping

Mac users waited many long years, but MacOS now has native window snapping, just like Windows 10. Click and drag a window to one of the four sides or four corners of your display, and a translucent box will appear in front of it. This indicates the shape the window will occupy. Release the mouse button, and it’ll automatically snap to this position.

MacOS Catalina (and newer) give you a total of 10 different options:

  • Drag a window into a corner, and it’ll take up 25% of your screen.
  • Drag it to the top portion of either the left or right side of the screen, and it’ll fill the top half of the display. Do the same for the bottom portion of the left or right side, and it’ll fill the bottom half of your screen.
  • Drag the window to the left or right of the screen without going near the corner of your display, and it’ll fill the left or right half.
  • Drag it to the bottom of the screen to make it fill the middle third.
  • Drag it to the top of the screen to make the app full screen. Note that if you then drag a little farther up, you’ll enter Mission Control, so you have to be careful with this.

Given there are so many choices, it may take a bit of practice to find the various sweet spots. But adding this functionality to MacOS is a definite boost for Mac users, who have been deprived of this useful function for far too long.

Note that window snapping is not the same as Split View — apps won’t enter full screen when you drag them into place, and there’s no black bar to adjust their size.

A quick word about Mission Control

MacOS Catalina Mission Control

Do you have several windows open at once and want something more comprehensive to view them all? Mission Control can help.

This mode displays all your open windows in a ribbon-like view that lets you quickly jump from one to another. Mission Control also lets you create multiple virtual desktops (or “spaces”), each with their own apps and windows open. These are also displayed on the ribbon, allowing you to easily move from one desktop to another.

You can access Mission Control in many ways, but one of the easiest is to simply drag a window up to the top edge of your screen, which should automatically enter Mission Control mode. Alternatively, Apple keyboards typically include a Mission Control button (F3) or, if you have a trackpad, you can swipe upward with either three or four fingers (depending on your trackpad settings).

You can enter Mission Control while in Split View, too, which is an easy way of switching windows as necessary. Mission Control also helps you switch to Split View when you have two full-screen apps open. Just activate Mission Control, and then drag your app window on top of another window or app icon. This should immediately activate Split View.

Cinch, a third-party alternative

Cinch

If you’re not on board with Split View, there are alternatives for creating a split-screen setup.

Good third-party alternatives can help you customize your split-screen commands for different purposes. One of our favorites for this customization is Cinch. It creates hot zones in your Mac screen’s four corners, plus two more on the left and right edges. Drag a window into one of these zones, and it snaps into place. Drag it into a corner, and it will automatically snap to one-quarter of your screen. Drag the window to the side of your screen, and it will snap to half the screen. Easy.

Some users will love these customizable alternatives to the standard split screen. If you are interested, Cinch does a good job of staying current with the latest MacOS.

Cinch offers a free trial version, but the full $7 version won’t exactly break the bank. Check out the app’s official site or look in the App Store to download it.

Some features won’t work on MacOS Mojave

Not all Cinch features are available if you’re working with an older operating system like MacOS Mojave. Some of the newer tools and features, like window snapping, simply aren’t compatible with older systems. You can still get decent results from other applications, like Magnet, which functions much like Cinch.

The Zoom and Move Window options introduced with Catalina also only function on a newer OS, so they’re unavailable on MacOS Mojave as well.

It’s possible to upgrade an earlier operating system to Big Sur, the latest version of MacOS, but there’s vital information to know first. Take a look at our guide on installing MacOS Big Sur.



Sitecore acquires Boxever and Four51 to give marketers a better view of every customer



Sitecore, a provider of digital experience management software, this week announced it has entered into definitive agreements to acquire Boxever, whose customer data platform (CDP) is accessed via the cloud, and Four51, which offers an ecommerce platform invoked using application programming interfaces (APIs).

Terms of the acquisitions were not disclosed, but they are both being funded by a $1.2 billion investment strategy the company revealed last month.

Sitecore is best known for providing an integrated content and digital asset management platform infused with AI capabilities that enterprises employ to personalize engagements with customers online. The two acquisitions will extend the scope of the Sitecore portfolio to include complementary ecommerce and CDP platforms at a time when organizations are investing heavily in digital business transformation initiatives, Sitecore CEO Steve Tzikakis said.

Most of those initiatives revolve around corporate websites organizations are now employing to engage with their end customers more directly, Tzikakis said. The Sitecore offerings are designed around a modern microservices-based architecture that makes it simpler for enterprise IT teams to mix and match headless services residing on the Sitecore backend platform, he noted.

Rather than requiring IT teams to integrate an array of backend services, the goal is to provide a modular set of backend services that share a common set of APIs, Tzikakis said. The CDP from Boxever is a software-as-a-service (SaaS) platform designed from the ground up to be integrated with web applications, he added. In contrast, rival providers of CDPs — such as Salesforce and SAP — are making available CDPs that are extensions of customer relationship management (CRM) and enterprise resource planning (ERP) applications, respectively, but are not integrated with the websites driving digital engagements with customers, Tzikakis said.

Sitecore currently has about 4,000 customers — spanning a wide range of verticals — 1,000 of them Fortune 1000 organizations. While companies that sell directly to consumers have historically invested in websites and associated applications, Tzikakis said in the wake of the COVID-19 pandemic many organizations that sold business-to-business (B2B) are now also trying to establish online relationships with downstream end customers.

Instead of having to invest in AI capabilities on their own, many of those organizations are leveraging the investments Sitecore made to surface the right content at the right time for any given customer, Tzikakis said. For all the hype surrounding digital business transformation these days, he noted most organizations still struggle to provide customers with an online experience that is specifically tailored to their interests, no matter how many times someone has visited their website. This is largely because organizations are unable to surface that content at the right time, Tzikakis added.

Naturally, Sitecore views Boxever and Four51 as linchpins in an evolving strategy for addressing digital business transformation initiatives that ideally should start with the customer rather than as a backend process. As more customers rely on the internet to purchase goods and services, the need for organizations to create more compelling content that enables them to better differentiate themselves will only increase.

The challenge is that many organizations are still operating in the Web 2.0 era. End customers, however, now expect an online multimedia experience that piques their interest without unnecessarily compromising their privacy. They may one day opt in to establish a deeper relationship, but until they do, the onus for inferring what content to surface when, based on all the available metrics, is on the vendor.



The DeanBeat: Our 360-degree view of the metaverse ecosystem

I’m very proud that we were able to create an original event around the metaverse this week and that it received good attention in spite of crazy news like GameStop‘s stock frenzy and Apple’s reaffirmation of its stance on privacy over targeted advertising.

Our GamesBeat Summit: Into the Metaverse and GamesBeat/Facebook: Driving Game Growth events drew more than 3,400 registered guests (not counting those who viewed livestreams), building a community around the metaverse and mobile games that we didn’t really know was there. We had record engagement with this metaverse event, and that confirms for me that we’re all fed up with the Zoomverse. Thank you for your support, as you affirmed that I’m not alone in this passion. While we may not be able to define the metaverse yet, it feels like we all agree something else should connect us amid the bleak reality of the pandemic.

When we started work on our metaverse conference last year, we weren’t sure if we would have half a day of talks. We wound up with 29 talks and 68 speakers (not counting the 13 sessions and 33 speakers from our day one GamesBeat/Facebook: Driving Game Growth event). The result was a 360-degree view of the metaverse ecosystem, which will help us figure out what it is.

As I searched for speakers in this space, I was pleased to find so much more effort going into it across multiple industries related to gaming and entertainment. And some of the people thinking about this have been contemplating it for decades. Ian Livingstone, cofounder of Hiro Capital, has been thinking about this for so long that he named his venture firm after the lead character in Snow Crash, Hiro Protagonist. And if you could somehow find out the secret research and development funds of the largest companies in technology, games, entertainment, and other industries, you would find billions upon billions of dollars being invested in the metaverse.

While assembling this event, the only session I wasn’t able to schedule in time was Brands and the Metaverse. But we can do that one in the future, as brands need some time to catch up to the idea. Yet I’m glad about what we got done and that our event didn’t present a monolithic perspective of the metaverse, which is rapidly moving from a hypothetical sci-fi dream to something real.

Just what is the metaverse?

We found that it is hard to define the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.

When I say the word, I mean an online universe that connects worlds, each of which is a place where you are enticed to live your life. Matthew Ball says it will give us virtually infinite supplies of persistent content, live content, on-demand playback, interactivity, dynamic and personalized ads, and content distribution. It won’t have a cap to how many people can be in it at a time. It will have a functioning economy. You’ll watch movies with friends by your side. You’ll play games. You’ll shop, socialize, and come back every day.

Vicki Dobbs Beck of ILMxLab says the metaverse brings together all of the different arts of humanity in one place. During the conference, I discovered that no one had the same definition of the metaverse. That became clear in the town hall Q&A, when Hironao Kunimitsu of Gumi said it would take a decade to create the metaverse, while Jesse Schell of Schell Games, Tim Sweeney of Epic Games, and Cyberpunk creator Mike Pondsmith of R. Talsorian Games said that a version of the metaverse already exists.

But nobody disputed how important the metaverse will be. I see it as the ultimate revenge of the nerds, the people who dreamed of making the Star Trek Holodeck or the main street of the Metaverse in Snow Crash. How big a deal is this?

“It’s like being at the beginning of motion pictures, or television, or the web,” Schell said.

The open or walled garden metaverse

Epic’s Sweeney stood up for the open metaverse, saying it should be democratically controlled by everybody, not just the tech giants with their walled gardens. That would only hold up innovation, result in big “taxes” for consumers, and put the rights of users at the bottom of the list. Those are bold words for someone who is suing Apple for antitrust violations. Jason Rubin, vice president of play at Facebook, didn’t repeat the same words about Apple. But he said Facebook’s goal is to knock down barriers that limit access to games. If Facebook succeeds with its Instant Games and cloud games, it could find a way to bypass the app stores, just as Sweeney is trying to do.

These are sentiments that make me feel that Sweeney is far from being alone. Ryan Gill and Toby Tremayne of Crucible proposed blockchain and the open web as a way to create agents that will represent us in the metaverse. Perhaps the open web could give developers a way around the app stores, or maybe Unit 2 Games’ Crayta technology will enable developers to build their dreams in the cloud.

Meanwhile, CCP Games CEO Hilmar Petursson reminded us that Eve Online, created back in 2003, already exists as a metaverse for 300,000 souls who are dedicated to it. While his audience is small by Roblox standards (Dave Baszucki’s company has 36 million daily users), it is bigger than the population of Petursson’s native Iceland, and he has 17 years of learning from an online haven where people are willing to live their lives for decades.

The creative leaders of the industry voiced their support for the metaverse as a new place to tell stories. Vicki Dobbs Beck of ILMxLab and Siobhan Reddy of Media Molecule talked about “storyliving,” or creating new experiences that emerge as we live our lives online. Those experiences could blend both emergent and narrated experiences, they said, while Baszucki at Roblox is putting his faith in user-generated games, which might be the only way to populate a metaverse with enough things to do.

Jacob Navok, CEO of Genvid Technologies, described Rival Peak, an interactive reality show that has become a hit on YouTube and Facebook, where influencers summarize the week’s events in a Survivor-like competition between AI characters. “Cloud-native” games like Rival Peak are the kind of modern entertainment the metaverse could enable.

Mark Long, CEO of Neon Media, is itching to get a big transmedia project underway that shows that entertainment properties can span different types of media and live in a kind of metaverse. And Hironao Kunimitsu of Gumi, whose company is working on a massively multiplayer online virtual reality game, told us how the metaverse isn’t just an inspiration from Western science fiction. He got his inspiration from the Japanese anime series Sword Art Online.

We had alternative views of the metaverse, as expressed by Akash Nigam of Genies and Edward Saatchi of Fable Studio. They argued that existing smaller 2D efforts — like Instagram AI characters — might be the earliest and most accessible manifestation of the metaverse. And John Hanke of Niantic and Ronan Dunne of Verizon talked about an augmented reality alliance that will take the metaverse on the go.

The chicken and egg problem

I saw from the presentations that we have a huge chicken-and-egg problem. Dunne mentioned that Verizon has committed $60 billion to the 5G network on the assumption that people will use it. That infrastructure gets built because there is a long history of such capital investments proving reasonable.

But who’s going to write the first check for the metaverse? Sweeney doesn’t have that much money, despite the riches of Fortnite. Apple could write that check. Maybe Google or Facebook or Amazon. But then we know that Sweeney’s dream of openness would probably go up in smoke.

Does the metaverse require us to lay down that kind of money for the infrastructure? Probably. We’re talking about the next version of the internet that would support shifting most of the world’s population into online living. It’s the place where gamers will be able to play every game they’ve ever wanted. If we’re not that ambitious, then I wouldn’t really say it is the metaverse we are trying to create. The tech giants have that kind of money, but Sweeney isn’t so sure we should trust them.

When you think about the scale of the problem, it’s a big one. Fortnite has amassed more than 350 million users, and that gives Sweeney a big advantage. He hopes to evolve Fortnite over time into the metaverse. But Fortnite’s world is built for 100 players fighting each other in a single shard at a time, the classic battle royale experience, while a metaverse needs 100 million people to be in a shard at once. Can Epic bolt that capability onto a game that was never designed to accomplish it?
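To make that scale gap concrete, here is a back-of-the-envelope sketch in Python. The numbers are purely illustrative (mine, not Epic’s or RP1’s), and the model is the naive worst case, where every player broadcasts state to every other player; real engines use interest management to avoid exactly this blowup:

```python
# Naive worst case: every player sends its state to every other player,
# so message volume grows quadratically with shard population.
def naive_messages_per_tick(players: int) -> int:
    return players * (players - 1)

for n in (100, 10_000, 100_000_000):
    print(f"{n:>11,} players -> {naive_messages_per_tick(n):.3e} messages per tick")
# 100 players is ~9.9e3 messages per tick; 100 million players is ~1e16,
# which is why a "shardless" design needs something smarter than broadcast.
```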

Dean Abramson and Sean Mann of RP1 pitched their “shardless” world architecture, which they hope will allow them to cram 100 million people into a single world without melting the polar ice caps. Abramson started working on the problem a decade ago, after designing the concurrent architecture of Full Tilt Poker. But, again, we have the chicken-and-egg problem. Who will adopt this technology, which hasn’t been proven yet, and toss out the architecture that has a lot of legacy users?

Fortunately, we have more resources dedicated to this purpose now than ever before. We’re at a rare moment when financial, cultural, entertainment, and human interests are aligned. Gaming had historic growth in 2020. The odds are good it will keep growing. Mobile insights and analytics firm App Annie estimates mobile games, already the biggest segment, will grow 20% to $120 billion in 2021. This growth means that game companies will have the cash to invest in the metaverse. Roblox, which is banking on user-generated content for the metaverse, raised $520 million and is still set to go public soon. That gives Roblox a big war chest to build the metaverse that founder Dave Baszucki wants to see.

The exchange rates and toll bridges between worlds

Roblox and Minecraft have hit critical mass as well. But if we were simply to connect those worlds, it would be hell to figure out the exchange rate between the currencies of the games. How do you translate the most valuable gun in Fortnite to something in Roblox?

Together Labs, creator of IMVU, has figured out a way to get users paid for the things they create with VCoin, which can be transferred between worlds or cashed out in fiat currency such as U.S. dollars. Folks like John Burris of Together Labs, John Linden at Mythical Games, and Arthur Madrid of The Sandbox are figuring out how best to handle the payments and economies of the metaverse using blockchain. Like Petursson, they might be creating products for the technogeeks of the world, a very small audience that could make them vastly profitable. They are learning a great deal, but they have to figure out how to make cryptocurrency and blockchain relevant to the masses.

One of the possible misconceptions of the metaverse, as depicted in Steven Spielberg’s Ready Player One movie, is that we will want to take a single avatar and move from one world to another world instantly. Petursson warned that you can wreck the economies of the games if you cause the flow of labor to switch from one game to another. That is, if you can mine something cheaply in Minecraft, and then convert it to something valuable in Roblox, where it’s more expensive to obtain the same resource, then all of the labor force in Roblox will shift to Minecraft and then take their resources back to Roblox. Why would different companies allow that to happen to their worlds? That’s one reason that Frederic Descamps’ Manticore Games lets users create multiple worlds — all within the same company — where the users can instantly teleport from one experience to another.
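As a toy illustration of that labor-arbitrage loop, consider pricing everything through a common intermediate currency, in the spirit of Together Labs’ VCoin. Everything below (currency names, rates, amounts) is invented for illustration:

```python
# Hypothetical exchange rates into a common intermediate currency
# ("vcoin" stands in for the VCoin idea; all numbers are invented).
RATES_TO_VCOIN = {
    "world_a_gold": 2.0,   # VCoin per unit of world A's gold
    "world_b_gold": 10.0,  # VCoin per unit of world B's gold
}

def convert(amount: float, src: str, dst: str) -> float:
    """Convert between world currencies via the intermediate currency."""
    return amount * RATES_TO_VCOIN[src] / RATES_TO_VCOIN[dst]

# If gold is cheap to mine in world A but its converted value buys more in
# world B, labor flows to world A until rates or in-game prices adjust:
# the imbalance Petursson warns about.
print(convert(100.0, "world_a_gold", "world_b_gold"))  # -> 20.0 units in world B
```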

If we don’t make worlds interoperable, then we’ll have the Tower of Babel, no different from today’s game industry. To make game worlds interoperable, Sweeney noted that we would have to figure out a new programming model, built on something like JavaScript, that can cross over different platforms. And yes, that means that code created by someone at Nintendo would have to run in Sony’s game world. That way, you could take your avatar and your guns and shoot at characters in another world. We have the cross-platform tech to do that — from blockchain to JavaScript — but how fast would it run? Who would debug it?

Sweeney said that if all companies could look beyond their own interests — and consider enlightened self-interest — they could create a metaverse for the greater good that could lead to such a large economic boom that all companies will benefit.

Common standards are necessary, but governments would likely get into the picture as well, said futurist Cathy Hackl. After all, the U.S. government probably wouldn’t want the metaverse to originate in China, or vice versa.

Living in science fiction

Above: Jensen Huang of Nvidia holds the world’s largest graphics card.

Image Credit: Nvidia

I don’t want to suggest that this is too hard a problem for society to solve, or that we have no hope of creating an ambitious metaverse. As Nvidia CEO Jensen Huang, who is also a hardcore engineer, said in an interview with me, “we’re living in science fiction now.” He believes the metaverse will come about soon because we have made so much progress in other technologies such as graphics and AI.

Remember how we used to talk about AI as a fantasy? Then, around eight years ago, it started working better with the advent of deep learning neural networks. Now we have 7,000 AI startups around the world that have raised billions of dollars, and 76% of enterprises are prioritizing AI in their 2021 IT budgets. Technologies such as OpenAI’s GPT-3 could enable some really smart AI characters, which means we could populate the metaverse with non-player characters, or NPCs, so intelligent we could talk to them for hours and never figure out that they aren’t really human beings. With Huang’s chips driving these advances, we’re on a path of accelerated innovation, and the advances in AI will help advance the creation of other inventions, such as the metaverse. It’s a snowball effect.

If, as Raph Koster, cofounder of Playable Worlds, says, “the metaverse is inevitable,” then we should think about some of the downsides.

We also have to think about the consequences of the metaverse. It would be wonderful if it became our digital Disneyland. But if it succeeds in creating artificial companions for all of us, we might lose interest in the real world, and it might become the most addictive drug ever made. These are all problems we have precedent for, as science fiction writers like Neal Stephenson, William Gibson, and Ernest Cline have taught us with their dystopian visions of the metaverse.

It would be a shame to create paradise and leave out a lot of people. That’s why Stanley Pierre-Louis, CEO of the Entertainment Software Association, reminded us that we should make these worlds diverse from the get-go while using the most diverse creative teams we can find. Otherwise, the metaverse creators could shoot themselves in the foot, creating an elitist colony instead of something accessible to everyone in the world with a smartphone. It is critical we get the metaverse right from the start, Pierre-Louis said, echoing Sweeney’s words even though they were talking about different things.

Our NPCs, ourselves

Richard Bartle, the game scholar at the University of Essex, offered some useful cautionary notes about creating the metaverse in the closing talk of our event. If we create artificial beings with sapience, or real intelligence and free will, then we have to consider whether we should treat them as our slaves, as we do in modern-day video games. If we think of these virtual beings as our property, then it’s OK if we turn the switch off on them or slaughter them for sport. But if we come to think of them as real people, or companions, then developers face a real ethical dilemma over how much power they should give us over these artificial people.

Rony Abovitz, the former CEO of Magic Leap, announced his new startup, Sun and Thunder, at our event. Over the next 15 years, he wants to create synthetic beings, which are AI-driven. But rather than just create them, Abovitz wants to imbue them with intelligence so that they can then help his company develop more synthetic beings. Sounds crazy, but it just might make sense in our world of accelerated change.

Bartle wants us to remember that the metaverse should be the place that he has always dreamed of. It should be the place where we are free from the social pressures of society, where we are free from the limitations of our own “roll of the dice,” or our individual heritage. The metaverse should be the place where we can be the people we can’t be in real life. Where we can be our best selves. And rather than be a vendor who extracts the biggest toll on the metaverse, developers should help us reach that goal of being our best selves, Bartle said.

Beyond the Zoomverse

I’ll reiterate why the metaverse is so important. We need this to happen now because we are stuck in the Zoomverse. The coronavirus has tainted our world, and the digital world provides the escape. For our mental well-being, we need something better than video calls. Something more human. Something that brings us together. Just as the world needed a vaccine and science brought it to us, our social needs are so great that we have to build something like the metaverse and, like the proliferation of smartphones throughout the world, bring it to everybody.

It will be a long road, and that is why we need inspiration. We need people to paint a vision for what life could be like. The science fiction writers laid the groundwork. Now it’s up to the game industry, and whatever other industries want to come along for the ride, to build it and make it fun. And as I said in my opening speech, I hope one day our GamesBeat community can hold an event inside the real metaverse.

Once again, I am so glad to see our speakers point out the momentous decisions we face around the building of the metaverse, which could either be humanity’s cage or its digital salvation. I appreciate the time you gave us, as apparently you might have all been better off making millions of dollars by buying GameStop stock instead.

I’ll weigh in next week with a roundup of our GamesBeat/Facebook: Driving Game Growth event. But first, I need to sleep.

