Several months ago, saying that the “cure” facial recognition offers is worse than the ills it treats would have seemed hyperbolic. But now the metaphor has become all too literal — and the medicine it promises isn’t so easy to reject when sickness is sweeping the globe.
Even as it depresses economies across the world, the coronavirus pandemic has sparked a new period of growth and development for facial recognition technology. Creators pitch their tools as a means to identify sick individuals without risking close-contact investigation.
In China, the biometrics company Telpo has launched non-contact body temperature measurement terminals that — the company claims — can identify users even if they wear a face mask. Telpo is near-evangelical about how useful its technology could be during the coronavirus crisis, writing that “this technology can not only reduce the risk of cross infection but also improve traffic efficiency by more than 10 times […] It is suitable for government, customs, airports, railway stations, enterprises, schools, communities, and other crowded public places.”
COVID-19: A Push Towards Dystopia?
At first glance, Telpo’s offerings seem…good. Of course we want to limit the spread of infection across public spaces; of course we want to protect our health workers by using contactless diagnostic tools. Wouldn’t we be remiss if we didn’t at least consider the opportunity?
And this is the heart of the problem. The marketing pitch is tempting in these anxious, fearful times. But in practice, using facial recognition to track the coronavirus can be downright terrifying. Take Russia as an example — according to reports from BBC, city officials in Moscow have begun leveraging the city’s massive network of cameras to keep track of residents during the pandemic lockdown.
In desperate times like these, the knee-jerk suspicion that we typically hold towards invasive technology wavers. We think that maybe, just this once, it might be okay to accept facial recognition surveillance — provided, of course, that we can slam the door on it when the world returns to normal. But can we? Once we open Pandora’s box, can we force it shut again?
In March, the New York Times reported that the White House had opened talks with major tech companies, including Facebook and Google, to assess whether using aggregated location data sourced from our mobile phones would facilitate better containment of the virus. Several lawmakers immediately pushed back on the idea; however, the discussion does force us to wonder — would we turn to more desperate measures, like facial surveillance? How much privacy would we sacrifice in exchange for better perceived control over the pandemic?
Understanding America’s Surveillance Culture Risk
I’ve been thinking about this idea ever since January, when an exposé published by the New York Times revealed that a startup called Clearview AI had quietly developed a facial recognition app capable of matching unknown subjects to their online images and profiles — and promptly peddled it to over 600 law enforcement agencies without any public scrutiny or oversight. Clearview stands as a precursor: a budding example of what surveillance culture in America could look like, if left unregulated. One quote in particular sticks in my head.
“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” David Scalzo, the founder of a private equity firm currently investing in Clearview commented for the Times. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”
Scalzo’s offhand, almost dismissive tone strikes an odd, chilling contrast to the gravity of his statement. If facial recognition technology will lead to a surveillance-state dystopia, shouldn’t we at least try to slow its forward momentum? Shouldn’t we at least consider the dangers that a dystopia might pose — especially during times like these, when privacy-eroding technology feels like a viable weapon against global pandemic?
I’m not the only one to ask these questions. Since January’s exposé, Clearview AI has been hit with no fewer than four lawsuits. The first castigated the company’s app as an “insidious encroachment” on civil liberties; the second took aim both at Clearview’s tool and at the IT products provider CDW for licensing the app for law enforcement use, alleging that “The [Chicago Police Department] […] gave approximately 30 [Crime Prevention and Information Center] officials full access to Clearview’s technology, effectively unleashing this vast, Orwellian surveillance tool on the citizens of Illinois.” The company was also recently sued in Virginia and Vermont.
All that said, it is worth noting that dozens of police departments across the country already use products with facial recognition capabilities. One report on the United States’ facial recognition market found that the industry is expected to grow from $3.2 billion in 2019 to $7.0 billion by 2024. The Washington Post further reports that the FBI alone has conducted over 390,000 facial-recognition searches across federal and local databases since 2011.
Unlike DNA evidence, facial recognition technology is relatively cheap and quick to run, lending itself easily to everyday use. It stands to reason that as better technology becomes available, adoption by public agencies will grow even more commonplace. We need to keep this slippery slope in mind. During a pandemic, we might welcome tools that allow us to track and slow the spread of disease, yet overlook the dangerous precedent they set in the long term.
Given all of this, it seems that we should, at the very least, avoid panic-prompted decisions to allow facial recognition — and instead, consider what we can do to avoid the slippery slope that facial recognition technology poses.
Are Bans Protection? Considering San Francisco
In the spring of 2019, San Francisco passed legislation that outright forbade government agencies from using tools capable of facial surveillance — although the ruling was amended to allow for equipped devices if there was no viable alternative. The lawmakers behind the new ordinance stated their reasoning clearly, writing that “the propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits.”
They have a point. Facial recognition software is notorious for its inaccuracy. One new federal study also found that people of color, women, older subjects, and children faced higher misidentification rates than white men.
“One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests, or worse,” Jay Stanley, a senior policy analyst at the American Civil Liberties Union (ACLU), told the Washington Post. “But the technology’s flaws are only one concern. Face recognition technology — accurate or not — can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale.”
While it’s still too early to have a clear gauge on the ban’s efficacy, it is worth noting that the new legislation sparked a few significant and immediate changes to the city’s police department. In December, Wired reported that “When the surveillance law and facial recognition ban were proposed in late January, San Francisco police officials told Ars Technica that the department stopped testing facial recognition in 2017. The department didn’t publicly mention that it had contracted with DataWorks that same year to maintain a mug shot database and facial recognition software as well as a facial recognition server through summer 2020.”
The department scrambled to dismantle the software after the ban, but its secretive approach remains problematic. The very fact that the San Francisco police department was able to acquire and deploy facial recognition technology without public oversight is troubling. The city’s current restrictions offer a stumbling block by limiting acceptance of surveillance culture as a normal part of everyday life — and prevent us from automatically reaching for it as a solution during times of panic.
A stumbling block, however, is not an outright barricade. Currently, San Francisco is under a shelter-in-place mandate; as of April 6, it had a reported 583 confirmed cases and nine deaths. If the situation worsens, could organizers suggest that the city make an exception and use facial recognition tracking to flatten the curve, just this once? It’s a long-shot hypothetical, but it does lead us to wonder what could happen if we allow circumstances to ease us into surveillance culture, one small step at a time.
Bans can only do so much. While the San Francisco ruling proves that Scalzo’s claim that “Laws have to determine what’s legal, but you can’t ban technology” isn’t strictly speaking correct, the sentiment behind it remains. Circumstances can compel us to consider privacy-eroding tech even when those explorations lead us down a path to dystopia.
So, in a way, Scalzo is right; the proliferation of facial recognition technology is inevitable. But that doesn’t mean that we should give up on bans and protective measures. Instead, we should pursue them further and slow the momentum as much as we can — if only to give ourselves time to establish regulations, rules, and protections. We can’t give in to short-term thinking; we can’t start down the slippery slope towards surveillance culture without considering the potential consequences. Otherwise, we may well find that the “cure” that facial recognition promises is, in the long term, far worse than any short-term panic.
Originally published on Hackernoon.com
Robots might take our jobs, but they (probably) won’t replace our wordsmiths.
These days, concerns about the slow proliferation of AI-powered workers underlie a near-constant, if quiet, discussion about which positions will be lost in the shuffle. According to a report published earlier this year by the Brookings Institution, roughly a quarter of jobs in the United States are at “high risk” of automation. The risk is especially pointed in fields such as food service, production operations, transportation, and administrative support — all sectors that require repetitive work. However, some in creatively-driven disciplines feel that the thoughtful nature of their work protects them from automation.
Journalism, novel-spinning, and poetry all live within the one creative bastion that we believe AI can’t possibly disrupt or infiltrate. And, to be fair, writing bots haven’t exactly proven themselves to be fonts of literary prowess. AI writing tends to live within the absurd and verge on the cringeworthy. Take Harry Potter and What Looked Like a Giant Pile of Ash, a chapter of an unofficial novel written by Botnik Studios’ predictive AI, as an example. The bot writes: “Leathery sheets of rain lashed at Harry’s ghost. Ron was standing there and doing a kind of frenzied tap dance. He saw Harry and immediately began to eat Hermione’s family.”
It’s certainly a memorable passage — both for its utter lack of cohesion and its familiarity. The tone and language almost mimic J.K. Rowling’s style — if J.K. Rowling lost all sense and decided to create cannibalistic characters, that is. Passages like these are both comedic and oddly comforting. They amuse us, reassure us of humans’ literary superiority, and prove to us that our written voices can’t be replaced — not yet.
However, not everything produced by AI is as ludicrous as A Giant Pile of Ash. Some pieces teeter on the edge of sophistication. Journalist John A. Tures experimented with the quality of AI-written text for the Observer. His findings? Computers can condense long articles into blurbs well enough, if with errors and the occasional missed point. As Tures described, “It’s like using Google Translate to convert this into a different language, another robot we probably didn’t think about as a robot.” It’s not perfect, he writes, but neither is it entirely off the mark.
Moreover, he notes, some news organizations are already using AI text bots to do low-level reporting. The Washington Post, for example, uses a bot called Heliograf to handle local stories that human reporters might not have the time to cover. Tures notes that these bots are generally effective at writing grammatically-accurate copy quickly, but tend to lose points on understanding the broader context and meaningful nuance within a topic. “They are vulnerable to not understanding satires, spoofs or mistakes,” he writes.
And yet, even with their flaws, this technology is significantly more capable than those who look only at comedic misfires like A Giant Pile of Ash might believe. In an article for the Financial Times, writer Marcus du Sautoy reflects on his experience with AI writing, commenting, “I even employed code to get AI to write 350 words of my current book. No one has yet identified the algorithmically generated passage (which I’m not sure I’m so pleased about, given that I’m hoping, as an author, to be hard to replace).”
Du Sautoy does note that AI struggles to create overarching narratives and often loses track of broader ideas. The technology is far from being able to write a novel — but still, even though he mentions his unease at the AI’s ability to blend seamlessly into his work only as a literal afterthought, the point he makes is essential. AI is coming dangerously close to being able to mimic the appearance of literature, if not the substance.
Take Google’s POEMPORTRAITS as an example. In early spring, engineers working in partnership with Google’s Arts & Culture Lab rolled out an algorithm that could write poetry. The project leaders, Ross Goodwin and Es Devlin, trained an algorithm to write poems by supplying the program with over 25 million words written by 19th-century poets. As Devlin describes in a blog post, “It works a bit like predictive text: it doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model.”
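Devlin’s “predictive text” analogy can be sketched in code. The toy below is a word-level Markov chain — a far simpler statistical model than whatever POEMPORTRAITS actually runs on (its internals aren’t public), but it illustrates the same basic idea: the program never copies whole phrases, it only learns which words tend to follow which in its training material and strings together fresh sequences from those statistics. The corpus here is a made-up stand-in, not the 19th-century poetry dataset.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Walk the model, picking a random observed continuation at each step."""
    rng = random.Random(seed)  # seeded for repeatable output
    state = rng.choice(list(model.keys()))
    out = list(state)
    for _ in range(length):
        choices = model.get(tuple(out[-len(state):]))
        if not choices:  # dead end: this word run was never continued in training
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical mini-corpus standing in for a real training set.
corpus = (
    "the dark divided spring returns and the dark divided light "
    "falls over the unknown hours of the dark and splendid spring"
)
model = build_model(corpus)
print(generate(model, length=8))
```

Every word pair in the output appears somewhere in the training text, yet the full line usually does not — which is exactly why such output can sound evocative while carrying no intended meaning.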
When users donate a word and a self-portrait, the program overlays an AI-written poem over a colorized, Instagrammable version of their photograph. The poems themselves aren’t half bad on the first read; Devlin’s goes: “This convergence of the unknown hours, arts and splendor of the dark divided spring.”
As Devlin herself puts it, the poetry produced is “surprisingly poignant, and at other times nonsensical.” The AI-provided poem sounds pretty, but is at best vague, and at worst devoid of meaning altogether. It’s a notable lapse, because poetry, at its heart, is about creating meaning and crafting implication through artful word selection. Turn-of-phrase beauty matters — but it is in no way the most important part of writing verse. In this context, AI-provided poetry seems hollow, shallow, and without the depth or meaning that drives literary tradition.
In other words, even beautiful phrases will miss the point if they don’t have a point to begin with.
In his article for the Observer, John A. Tures asked a journalism conference attendee his thoughts on what robots struggle with when it comes to writing. “He pointed out that robots don’t handle literature well,” Tures writes, “covering the facts, and maybe reactions, but not reason. It wouldn’t understand why something matters. Can you imagine a robot trying to figure out why To Kill A Mockingbird matters?”
Robots are going to venture into writing eventually — the forward march is already happening. Prose and poetry aren’t as protected within the creative employment bastion as we think they are; over time, we could see robots taking over roles that journalists used to hold. In our fake-news-dominated social media landscape, bad actors could even weaponize the technology to flood our media feeds and message boards. It’s undoubtedly dangerous — but that’s a risk that’s been talked about before.
Instead, I find myself wondering about the quieter, less immediately impactful risks. I’m worried that when AI writes what we read, our ability to think deeply about ourselves and our society will slowly erode.
Societies and individuals grow only when they are pushed to question themselves — to think, and to delve into the why behind their literature. We’re taught why To Kill a Mockingbird matters because that process of deep reading and introspection makes us think about ourselves, our communities, and what it means to want to change. In a world where so much of our communication is distilled down into tweet-optimized headlines and blurbs, where we’re not taking the time to read below a headline or first paragraph, these shallow, AI-penned lines are problematic — not because they exist, but because they do not spur thought.
“This convergence of the unknown hours, arts and splendor of the dark divided spring.”
The line sounds beautiful; it even evokes a vague image. Yet it has no underlying message — although, to be fair, it wasn’t meant to make a point beyond coherency. It’s shallow entertainment under a thin veil of sophistication. It fails at overarching narratives, misses nuance, and never grasps the heartbeat of human history, empathy, and understanding.
If it doesn’t have that foundation to create a message, what does it have? When we get to a place where AI is writing for us — and be sure, that time will come — are we going to be thinking less? Will there be less depth to our stories, minimal thought inspired by their twists? Will it become an echoing room rather than a path forward? At the very least, will these stories take over our Twitter feeds and Facebook newsfeeds, pulling us away from human-crafted stories that push us to think?
I worry that’s the case. But then again, maybe I’m wrong — maybe reading about how an AI thinks that Ron ate Hermione’s family provides enough dark and hackneyed comedy to reassure our belief that AI will never step beyond its assigned role as a ludicrous word-hacker.
For now, at least.
In recent years, smart devices have served as futuristic windows into new (and shoppable) consumer landscapes. The glimpses that tech offers allow us to put imagination aside and bring potential purchases into our lives for a trial run — virtually.
The opportunities are near-endless; rather than order glasses online and hope for a good fit, customers at Warby Parker can assess their favorite lenses with a quick selfie. Instead of lacing and unlacing countless pairs of shoes in-store, Nike shoppers can scan their feet and find a perfect fit by “trying on” their favorite products virtually.
Even in-store dressing rooms have their digital twins. At the Gap, customers can pick from five common body types to see how their favorite new styles will look on them without the time-consuming hassle of cycling through several outfits.
Augmented reality — technology that facilitates digital additions to real-world images — is slowly becoming an accepted part of the retail experience. AR allows us to visualize the goods we see in-store and online within our day-to-day environment. In a way, the tech’s capabilities speak to the heart of retail. Like store displays and flashy product photos, AR-powered apps help consumers visualize potential purchases within their home environments and daily routines — and even convince them to buy.
Currently, AR tech is still somewhat of a novelty. However, it seems likely that AR will evolve into an everyday aspect of retail shopping within a few short years. Researchers for Gartner found that 46 percent of surveyed retailers intended to deploy AR or VR customer experience solutions by 2020, and estimated that a whopping 100 million consumers would shop in AR online and in-store by the same year. Along the same lines, Goldman Sachs estimates that the global market for VR and AR in retail will top $1.6 billion by 2025.
However, the number of AR-powered shoppers is impressive even today. Earlier this year, eMarketer released a report that quantified the number of consumers who would use AR more than once a month at 68.7 million people, or 20.8 percent of the US population. The report points to increased familiarity with and interest in the technology as a significant driver behind the AR retail boom. One major source of that interest, the researchers write, was Pokémon Go.
While it would be oversimplifying to say that Pokémon Go sparked retail’s AR revolution, it wouldn’t be entirely incorrect, either. The virally-successful game served as many consumers’ first introduction to the technology’s engaging capabilities. When the game first exploded onto the market in 2016, it was all but commonplace to see people paused on sidewalks, furiously tapping their screen in an attempt to capture a digital creature.
It was the first wildly successful AR game. Unlike other smartphone apps, Pokémon Go superimposed its challenges over a real-world map of its user’s location, creating an immersive and novel experience for players. Apptopia estimates that at its peak, the game had 100 million users worldwide. It introduced countless people to the idea of integrating augmented reality into their daily lives — and sparked a few conversations among investors, too.
Soon after the game’s debut, CNBC reporters quoted Cowen & Co. analyst Oliver Chen as saying that Pokémon Go had the power to transform retail. As Chen explains, “The new free-to-play [augmented reality] gaming app has broad implications for retail as it addresses declining mall traffic, plus emerging trends toward social experience and health [and] wellness. [The game] illustrates how augmented reality could potentially play a more significant role in retail over time.”
Pokémon Go’s heyday has long passed us by, but the transformational potential of AR for retail remains. Partly because of the game’s popularity, AR applications have become increasingly common and accessible. Moreover, as analysts for the above eMarketer study point out, support for AR development is on the rise.
“The introduction of Apple’s ARKit and Google’s ARCore software development kits (SDKs) in 2017 signaled the tech industry’s confidence in—and ongoing support of—AR experiences,” the researchers write. “This is spurring developers to accelerate activity and create more applications.”
So, what benefits could these new, retail-focused AR applications bring? In theory, AR products could gamify the shopping experience, pique interest in products, promote in-store foot traffic, and improve customer engagement. The last is particularly important; in an age where online shopping is not-so-subtly encroaching on traditional stores, retailers face increased pressure to better engage customers by redefining shopping trips into shopping experiences.
AR presents a means to do so. Statistics provided by Retail Perceptions indicate that 61 percent of surveyed shoppers prefer to shop at stores that offer retail experiences, 71 percent would return more often if AR was available, and 40 percent would pay more for a product if they could try it out in AR first.
AR gives retailers the opportunity to boost consumer engagement, make shopping more of an experience than a chore, and create a more personalized digital experience for customers. In some cases, AR-powered ads can even establish stronger touchpoints on social media platforms. Where consumers might have zipped past a traditional ad without a thought, the interactive nature of AR encourages platform users to pause their scrolling and engage with the advertisement — thereby making it more likely that they will check out or even buy a product.
The shift to AR is already well underway. In the summer, L’Oreal Armani Beauty announced its intention to be the first beauty brand to integrate AR capabilities into its WeChat application. The brand is putting a new spin on the virtual dressing room: in China, consumers will be able to virtually try out cosmetics and share their screenshots on social media. For L’Oreal, AR tech will create an opportunity for better customer experience, sales, and consumer-generated marketing all via one app.
If this announcement demonstrates anything, it would be that despite its relative nascence, AR solutions in retail are continually evolving. Today, those tools merge digital and retail capabilities, providing a means for companies to expand their stores through a camera lens.
It will be interesting to see what new retail opportunities will bloom from AR’s growth next.
Originally published on Disrupt Magazine
When judged from a distance, Dubai doesn’t exactly embody a shining example of sustainability.
The most populous city in the United Arab Emirates is a glittering, luxurious metropolis in the middle of a barren desert. It demands power and burns resources as needed — and it needs a breathtaking amount. According to Reuters, three-quarters of all electricity produced in the UAE is used to cool buildings across the emirates and ensure that residents stay cool, comfortable and entertained.
The use of those resources can be mind-boggling. Consider Ski Dubai as an example; with just a short trip downtown, city residents can trade their light clothing and sunglasses for ski parkas and snow boots and revel in wintertime fun. On Ski Dubai’s indoor mountain, air conditioners work around the clock to maintain the slopes’ winter-wonderland illusion in a place where summer temperatures routinely top 113 degrees Fahrenheit.
For National Geographic journalist Robert Kunzig, Dubai’s ski slopes are just a symbol of Dubai’s unsustainable approach. “Dubai burns far more fossil fuel to air-condition its towers of glass,” he writes. “To keep the taps running in all those buildings, it essentially boils hundreds of Olympic pools worth of seawater every day. And to create more beachfront for more luxury hotels and villas, it buried coral reefs under immense artificial islands.”
And yet, despite the almost comic lack of sustainable thought that these day-to-day realities reveal, Dubai might just be on-track to outpace Western hubs in their collective race towards a green future. Only 15 miles away from the resource-gobbling slopes of Ski Dubai, a new — even opposing — philosophy has laid its first metaphorical bricks.
Now for something completely different
Dubai’s Sustainable City is an icon of sustainability. First established in 2015, the $354 million mega project aims to limit its negative environmental impact as much as possible and to become a net-zero settlement that produces all of the (renewable) energy it needs to run day-to-day operations.
Today, the community encompasses 113 acres, 500 villas and over 3,000 residents. Every home in the settlement has a solar panel on its roof. Residents get around via public transport and electric vehicles; gasoline-powered cars are banned outright from most neighborhoods. Rather than offering traditional fuel stations, the community hosts charging stops.
The environmental benefits of these and other sustainability-minded designs are inarguable. Analysts indicate that average annual water consumption in Sustainable City is roughly 40 percent lower than in Dubai proper. Similarly, electricity costs for the settlement are a whopping 40 percent less than the city’s green building standards require. According to a Khaleej Times report, the settlement has limited its carbon footprint by 150 tons of carbon dioxide per year by using biodiesel during construction.
The community’s approach to sustainable living also has had a significant impact on waste management. As writers for Energy Central recently reported, “Because of recycling, the average waste generation at [the Sustainable City] is only 1.17 kilograms per person per day, which is 60 percent lower than the average. With this, [the city] has successfully diverted 85 percent of its waste from landfills.”
Taken together, these reports prove that a sustainability mindset can power an urban community — literally. Its very existence pushes those of us overseas to wonder if similar projects might find the same success in our backyards.
“The Sustainable City cannot end here,” Karim el-Jisr, chief innovation officer for the Sustainable City Institute, told Euronews last February. “Unless we see another 1,000 Sustainable Cities, we will not make a difference to the planet. A true measure of our success is not what you see [in Dubai], but […] replication by others and by ourselves.”
So, this raises the question — if a sustainable community can spring up inside a city as notoriously environmentally unfriendly as Dubai, shouldn’t a similar project work near a city such as New York City, which is already relatively green?
A tale of two cities
Unfortunately, the issue isn’t that simple. While New York undoubtedly could benefit from the car-free neighborhoods, energy-efficient buildings and recycling-centric resource management policies that the Sustainable City model offers, the likelihood of such a community springing up in the five boroughs is close to nil.
Unlike Dubai, New York City is already heavily built up. While Dubai has the space — and resources — to construct an environmentally friendly neighborhood from the ground up, New York definitively does not. The space issue aside, Dubai has put years of effort and hundreds of millions of dollars in public and private funding towards building Sustainable City. Needless to say, New York City is not in a position to contribute the same.
As Alessandro Melis, an architecture professor at the University of Portsmouth in the United Kingdom, put the matter for Reuters, “[Projects such as Sustainable City] are good experiments that can tell us many things, but at this moment in time it would be more important to focus on how we can transform the urban fabric that we already have.”
New York won’t be able to host a separate community the way that Dubai can — but it may be able to make a similarly effective change from within its existing framework.
It is worth noting that the city already has a robustly sustainable foundation; in 2016, New York ranked as the 26th most sustainable city in the world on the Arcadis sustainable index. This ranking is partially due to the city’s robust public transportation system and existing sustainability measures.
However, more can and should be done. It seems likely New York will undergo a sustainability retrofit in upcoming years, especially given recent legislative moves. Last year, the city passed the Climate Mobilization Act, which, as of last month, mandates that all buildings larger than 25,000 square feet post their energy efficiency grades near public entryways. In 2024, these rules will become even stricter, imposing fines on those that fall below a certain grade.
Writers for City Lab further report that the Climate Mobilization Act will institute a “Fossil Fuel Cap.” They write, “The cap will require buildings to be upgraded or retrofitted with things like more energy-efficient heaters and boilers, as well as solar panels and windows that reduce heat loss in the winter and heat gain in the summer.”
Taken together, this new legislation shifts the responsibility — at least in part — for reimagining a sustainable New York onto property owners. This choice poses a few challenges; unlike in Dubai, where efforts were coordinated and funded under one overarching vision for a sustainable community, New York’s sustainability efforts will be moved forward by a disjointed cohort of property owners as they abide by new legislation. It’s an ironically inefficient way to go about achieving sustainability, even if the government does offer some financial incentives to adopt sustainable building practices.
However, these roadblocks need not stop New Yorkers from achieving sustainability on par with Dubai’s Sustainable City. While a lack of resources and space prevents New York from mimicking the UAE’s cohesive approach to building sustainable communities, the city does have the ability to retrofit and innovate within its existing urban framework.
Unlike Dubai’s, New York’s sustainability efforts won’t be an addition to its borders, but an evolution within them. The process will look different, for sure, and perhaps take longer to reach its goals — but the end result will be no less beneficial to the environment.
Originally published on GreenBiz
Most consumers aren’t in the habit of shopping in stores they dislike — and who could blame them? After all, a store visit requires them to leave their homes, sit in traffic, and physically search for the products that they want. Why would they go to the effort that traditional shopping requires if they know that the employees they meet will be rude, the store layout confusing, and the overall experience frustrating?
The truth of the matter is that if your customers find their time in-store underwhelming, the quality of your product won’t matter. They will go elsewhere — and in the digital age, “elsewhere” usually means “online.”
Online shopping has ushered in an entirely new definition for consumer convenience in retail. Digital experiences are painstakingly personalized; as one writer for the 2019 Store Experience Study describes, “Since the rise of the Internet, retailers have done an exemplary job of making their customers feel like royalty each and every time they were engaged online. Retailers made it easy to find items (which were always in stock), knew their purchase history, knew their likes and dislikes, and made suggestions for complementary or supplementary products.”
The cherry on top, of course, is that once a customer swipes, clicks, and taps their way through the ordering process, they can expect their chosen product to arrive at their doorstep within a few short days.
When faced with that degree of convenience, how can traditional brick-and-mortar retailers compete? Younger consumers have already enthusiastically taken to online shopping; according to statistics from Kinsta, roughly 67% of millennials and 56% of Gen X’ers prefer buying online to browsing a physical store.
While online shopping might have claimed top marks for convenience, brick-and-mortar stores are on the verge of reclaiming customers on the basis of shopping experience. If retailers can prove to their consumers that an hour-long store visit is an entertaining diversion, rather than a chore, they may have the chance to redefine consumer expectations for in-person shopping and revitalize the brick-and-mortar landscape.
This shift is already underway. According to the Store Experience Study mentioned above, retailers have begun focusing on personalizing customer experience (51%), empowering store associates to provide better service (48%), and rethinking the design of the in-store customer journey (31%) — all to boost consumers’ interest in in-store shopping. This new strategic focus has likely contributed to the recent uptick of brick-and-mortar performance. As of 2018, retail sales had risen nearly 6% since the year before, and roughly 3,600 net new stores had opened their doors.
The path forward is clear: retailers need to recast the shopping experience as a form of convenient entertainment, rather than a digitally-avoidable trip. Below, I’ve listed a few ways to do so.
Rethink the Purpose of a Shopping Trip
When consumers visit stores in the future, they should come out of interest — not necessity. Retailers need to incorporate entertainment and engagement elements into their customer journey designs.
Consider J.C. Penney’s recent redesign of a store in Hurst, Texas, as an example. Described by the company’s leaders as “experiential at its core,” the redesign incorporates service offerings into its lifestyle brand. Now, customers can not only purchase makeup in the beauty section but also attend an in-store workshop on how to achieve model-perfect looks. Similarly, the store has begun offering fitness classes alongside its activewear brands.
The department store also recently launched a partnership with Pinterest to help shoppers better explore the store’s offerings during a home decor refresh. Using J.C. Penney’s in-store tool, shoppers can answer a few simple questions and see a curated Pinterest board that lists J.C. Penney products suited to their needs and tastes.
It is too early to know whether this experiential approach to shopping will pay off. If it does, however, J.C. Penney may have created an innovative blueprint for future department stores.
Meld Digital Convenience With In-Store Experience
Brick-and-mortar stores may not be defined by technology, but they can certainly benefit from it. By weaving technology into their in-store experiential strategy, traditional retailers can provide the same convenience that e-retailers pride themselves on offering. With an app, for example, retailers could allow consumers to schedule a fast pickup, pay online, check whether a given item is in stock, or even access exclusive digital coupons. Such a strategy would make in-store shopping nearly as quick and convenient as online shopping — if not more so, given that consumers don’t need to wait for their item to be delivered.
In recent years, it has become increasingly clear that the digital age won’t spell the end for traditional retail. Instead, it will challenge brick-and-mortar stores to reinvent the in-person shopping experience to be more engaging, entertaining, and convenient than ever before.
Originally published on ScoreNYC
Today, a customer’s entire experience with a company — from first inquiry to final transaction — can and often does occur entirely online. Many consumers seem to prefer it that way, too. According to data collected by Nextiva, 57% of respondents said that they would rather contact companies via email or social media than through voice-based customer support.
The shift to a tech-savvy business foundation isn’t just convenient for consumers — it comes with considerable benefits for businesses too. In one report published by Juniper Research, analysts projected that automated systems would save a collective $8 billion in customer support costs by 2022. That’s a compelling financial argument for businesses. Smart Insights estimates that 34% of companies have already undergone a digital transformation, while researchers for Seagate anticipate that over 60% of CEOs globally will begin prioritizing digital strategies to improve customer experience by the end of this year.
Integrating technology into our day-to-day business operations is a no-brainer, given its potential to lessen costs, boost convenience, and improve consumer experiences. However, in our race to meet the digital age, we may be leaving one critical aspect of customer service behind: human connection.
As much as consumers appreciate the convenience and speed that digital tools and systems facilitate, they also need to feel a genuine human connection and know that there are people behind the AI-powered customer hotline. As one writer puts the matter in an article for Business Insider, “A satisfactory customer experience depends on how well a company can relate to a customer on an emotional level. To create memorable experiences, employees who are curious and have a genuine desire to assist the customers can set brands apart.”
Business consultant Chris Malone and social psychologist Susan T. Fiske researched this emotional connection for their book, The Human Brand: How We Relate to People, Products, and Companies. In the text, they write that consumers gravitate towards companies much as they do towards friends: if they perceive a business as emotionally warm and welcoming, but not particularly competent, they will still enjoy the experience and like the brand. In contrast, if a company is competent but cold in its customer engagement, consumers tend to visit only when circumstances demand it.
The ideal, they say, is for companies to be both warm and competent. Within the context of our digitally-driven world, striking that balance means integrating consumer service technology without wholly excising human personality.
Businesses need to identify when their AI-powered chatbot or customer service channel crosses over the line from useful to canned or frustrating. Sometimes, a robot voice just isn’t helpful enough; according to statistics published by American Express, 40% of customers prefer talking to a human service representative when they struggle with complex problems. Consumers should have the means to reach out to human representatives if they can’t solve their problems through automated channels.
People want to connect with a brand that has personality, voice, and empathy — even across digital channels. Social media platforms have evolved into significant touchpoints for businesses today; researchers for Nextiva estimate that over 35% of consumers in the U.S. reached out to a business through social media in 2017. Their expectations, too, are significant — 48% of the customers who contact a company via social media expect a response to their questions or complaints within 24 hours. Even so, a cold, automated response may be just as ineffective as no response at all.
Emerging technology is not a be-all, end-all solution to our problems. Businesses need to treat digital channels with all the personality, empathy, and care that they would offer during a client call. If they rely on canned responses or AI bots, they may find their consumer pool shrinking as customers defect to companies with more perceived warmth.
Technology is convenient, yes — however, the convenience it creates should never come at the cost of human connection.
Originally published on ScoreNYC
Bennat Berger is an NYC-based tech writer, investor, and entrepreneur. He is the founder of Novel Property Ventures, a company that specializes in finding, acquiring, and managing high-potential multifamily residential units in New York City. Berger is also the founder of Novel Private Equity, a private equity firm that gives tech startups the support they need to thrive in an increasingly competitive business market.