We Can’t Afford to Sell Out on AI Ethics

Today, we use AI with the expectation that it will make us better than we are — faster, more efficient, more competitive, more accurate. Businesses in nearly every industry apply artificial intelligence tools to achieve goals that we would, only a decade or two ago, have derided as moonshot dreams. But as we incorporate AI into our decision-making processes, we can never forget that even as it magnifies our capabilities, it can also plainly expose our flaws.

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included,” AI researcher Kate Crawford once wrote for the New York Times. “Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

The need for greater inclusivity and ethics-centric research in AI development is well-established — which is why it was so shocking to read about Google’s seemingly senseless firing of AI ethicist Timnit Gebru.

For years, Gebru has been an influential voice in AI research and inclusivity. She cofounded the Black in AI affinity group and is an outspoken advocate for diversity in the tech industry. In 2018, she co-wrote Gender Shades, an oft-cited investigation into how commercial facial-analysis systems misclassify women and people of color. The team Gebru built at Google encompassed several notable researchers and was one of the most diverse working in the AI sector.

“I can’t imagine anybody else who would be safer than me,” Gebru shared in an interview with the Washington Post. “I was super visible. I’m well known in the research community, but also the regulatory space. I have a lot of grass-roots support — and this is what happened.”

So what, exactly, happened?

In November of 2020, Gebru and her team completed a research paper that examined the potential risks inherent to large language models, which can be used to discern basic meaning from text and, in some cases, create new and convincing copy.

Gebru and her team found three major areas of concern. The first was environmental; relying on large language models could lead to a significant increase in energy consumption and, by extension, our carbon footprint.

The second related to unintended bias; because large language models require massive amounts of data mined from the Internet, “racist, sexist, and otherwise abusive language” could accidentally be included during the training process. Lastly, Gebru’s team pointed out that as large language models become more adept at mimicking language, they could be used to manufacture dangerously convincing misinformation online.

The paper was exhaustively cited and peer-reviewed by over thirty large-language-model experts, bias researchers, critics, and model users. So it came as a shock when Gebru’s team received instructions from HR to either retract the paper or remove the researchers’ names from the submission. Gebru addressed the feedback and asked for an explanation of why retraction was necessary. She received nothing but vague, anonymous feedback and further instructions to retract the paper. Again, Gebru addressed the feedback — but to no avail. She was informed that she had a week to rescind her work.

The back-and-forth was exhausting for Gebru, who had spent months struggling to improve diversity and advocate for the underrepresented at Google. (Only 1.9 percent of Google’s employee base are Black women.) To be silenced while furthering research on AI ethics and the potential consequences of bias in machine learning felt deeply ironic.

Frustrated, she sent an email detailing her experience to an internal listserv, Google Brain Women and Allies. Shortly thereafter, she was dismissed from Google for “conduct not befitting a Google Manager.” Amid the fallout, Google AI head Jeff Dean claimed that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” that undermined the risks she outlined — a shocking accusation, given the paper’s breadth of research.

To Gebru, Google’s reaction felt like corporate censorship.

“[Jeff’s email] talks about how our research [paper on large language models] had gaps, it was missing some literature,” she told MIT’s Technology Review. “[The email doesn’t] sound like they’re talking to people who are experts in their area. This is not peer review. This is not reviewer #2 telling you, ‘Hey, there’s this missing citation.’ This is a group of people, who we don’t know, who are high up because they’ve been at Google for a long time, who, for some unknown reason and process that I’ve never seen ever, were given the power to shut down this research.”

“You’re not going to have papers that make the company happy all the time and don’t point out problems,” Gebru concluded in another interview for Wired. “That’s antithetical to what it means to be that kind of researcher.”

We know that diversity and ethics-centric research are crucial to the development of truly effective and unbiased AI technologies. In this context, firing Gebru — a Black, female researcher with extensive accolades for her work in AI ethics — for doing her job is senseless. There can be no other option but to view Google’s actions as corporate censorship.

For context — in 2018, Google developed BERT, a large language model, and used it to improve how its search engine interprets queries. Last year, the company made headlines with large-language-model training techniques that allowed it to train a 1.6-trillion-parameter model four times as quickly as previously possible. Large language models offer a lucrative avenue of exploration for Google; having them questioned by an in-house research team could be embarrassing at best, and limiting at worst.

In an ideal world, Google would have incorporated Gebru’s research findings into its operations and sought ways to mitigate the risks she identified. Instead, it attempted to compel her to revise her document to include cherry-picked “positive” research and downplay her findings. Think about that for a moment — that kind of interference is roughly analogous to a pharmaceutical company asking researchers to fudge the statistics on a new drug’s side effects. Such intervention is not only unethical; it opens the door to real harm.

Then, when that interference failed, Google leadership worked to silence and discredit Gebru. As one writer for Wired concludes, that decision proves that “however sincere a company like Google’s promises may seem — corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.”

Gebru is an undeniably strong person, an authority in her field with a robust support network. She had the force of personality to stand her ground against Google. But what if someone who wasn’t quite as well-respected, supported, and brave stood in her position? How much valuable research could be quashed due to corporate politicking? It’s a frightening thought.

The Gebru fallout tells us in no uncertain terms that we need to give real consideration to how much editorial control tech companies should have over research, even if they employ the researchers who produce it. If left unchecked, corporate censorship could stand to usher in the worst iteration of AI: one that writes our biases large, harms the already-underserved, and dismisses fairness in favor of profit.

This article was originally published on BeingHuman.ai

April 16th, 2021 | Technology

Is Unequal Access to Data Undermining Your Company’s Success?

Big data has far and away transcended its status as a technology buzzword. It has become a full-fledged infrastructural norm; countless business leaders have embraced its potential to deliver sharper insight, reveal trends, and support their annual goals. That embrace has created the need for multifaceted implementation strategies which, ideally, aim to use data as a binding agent across all company sectors, ultimately streamlining how the organization functions as a whole.

However, despite their ambition and openness to change, many of these leaders fail to recognize that their implementation strategies are flawed — namely in terms of distribution and accessibility. In turn, these inconsistencies can foster a culture of inequality and opacity, counteracting and undermining the very success the strategy strives to achieve.

To curb these setbacks, organizations must be proactive in expanding data knowledge and utilization equally across their different departments.

Diagnosing the problem

To establish a stable data landscape, business leaders need to identify both the internal problem at hand and its broader implications. Limited data distribution should be viewed not only as a threat to corporate functions, but also a potential slight to certain divisions of the organization’s workforce.

The pitfalls of such disparities have already been carefully observed at a societal level, even before the pandemic took them to new heights, and to introduce them to the workplace is to court slow-burning disarray. Inner turmoil can quickly lead to poor external performance, causing companies to fall behind competitors that are more internally cohesive.

With the proper mindset in place, leaders can turn their attention to a variety of strategies to nip their data problem in the bud, and this begins with pinpointing where deficiencies lie. For instance, are employees reliant upon a “suboptimal mix of cloud-based technology and on-premise enterprise systems,” where the collective workforce is hamstrung by patchy, insufficient access — as nearly two-thirds of companies report — or is quality access simply limited to specific parts of the company? Organizations will also need to assess workers’ level of “data illiteracy,” a problem that arises when data-driven decision-making is limited to select departments and teams.
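To make that pinpointing concrete, here is a minimal sketch of a department-level access audit in Python. It assumes a hypothetical CSV export (employee_id, department, datasets_accessible) from an access-management tool, and the required-dataset list is invented for illustration; a real audit would substitute its own inventory.

```python
# A minimal sketch of a data-access audit. The file name, column names, and
# dataset list below are hypothetical stand-ins, not any vendor's format.
import csv
from collections import defaultdict

REQUIRED_DATASETS = {"sales_warehouse", "customer_360", "web_analytics"}

dept_totals = defaultdict(int)
dept_with_full_access = defaultdict(int)

with open("access_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        dept = row["department"]
        datasets = set(row["datasets_accessible"].split(";"))
        dept_totals[dept] += 1
        # Count an employee as fully served only if every required dataset
        # is accessible to them.
        if REQUIRED_DATASETS <= datasets:
            dept_with_full_access[dept] += 1

for dept, total in sorted(dept_totals.items()):
    coverage = dept_with_full_access[dept] / total
    flag = "  <-- access gap" if coverage < 0.5 else ""
    print(f"{dept:20s} {coverage:6.1%} of staff have full access{flag}")
```

Even a crude report like this turns “patchy, insufficient access” from an anecdote into a per-department number that leaders can act on.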

Leaders must also recognize that the issue can spread beyond data access alone, affecting how confidently a company invests in technological innovation. For example, if the company places the bulk of its data-technology investment authority with non-IT departments, IT workers may start to feel disenfranchised, feeding a general sense of confusion and mismatched priorities. The “shadow systems” that emerge from such arrangements are not sustainable because they confuse expectations and leave some workers ill-equipped to address problems that would otherwise be in their wheelhouse.

Creating long-term success

With remote work enduring as a new norm, emphasis on data and technology is arguably at an all-time high, and the need for a tight digital ship has followed suit. Therefore, solutions to the above should be handled with diligence, and it is important to remember that seemingly cut-and-dry remedies are anything but; simply making data access more widespread is not the full answer. Instead, to create lasting success, a broader systemic change should be favored over a temporary band-aid.

By focusing on total reinvention, leaders will be able to properly address each micro-issue contributing to the macro flaw. These focal points could include better, more equal funding to multiple departments, stronger team integration to optimally disperse data knowledge and learning opportunities, and reallocation of investment dollars to reflect future innovation rather than retroactive level-setting.

These efforts can also be applied to the introduction (or updating) of relevant technology aimed at an improved data-driven work cycle. Access to AI and automation tools, for instance, should be evenly distributed to all applicable departments — with training provided for those unversed in how to use these resources. Success in each of these areas will be contingent upon properly communicated expectations.

Regardless of where change is most needed, a general rule of thumb is to isolate growth areas that require a rapid return and use them as a jumping-off point. The current system should be audited based on its existing depth and reach, and any salvageable aspects can be leveraged during the construction of a stronger, more efficient successor. New infrastructure must also remain compatible with the business’s technological and financial capabilities.

This type of large-scale change may seem daunting, even unreachable, but it has become an objective necessity as COVID continues to rewrite the rulebook for businesses worldwide. That said, the challenge can be met head-on with a blend of forward-thinking, unfailing commitment, and, above all, constant transparency and attention to detail.

This article was originally published on Business2Community

February 13th, 2021 | Business, Technology

CEOs: AI Is Not A Magic Wand

The technology holds great promise, no question — but deployment must be done strategically, and with the understanding that you likely won’t see gains on your first attempt to integrate it.

If you achieve the improbable often enough, even the impossible stops feeling quite so out of reach.

Over the last several decades, artificial intelligence has permeated almost every American business sector. Its proponents position AI as the tech-savvy executive leader’s magic wand — a tool that can wave away inefficiency and spark new solutions in a pinch. Its apparent success has winched up our suspension of disbelief to ever-loftier heights; now, even if AI tools aren’t a perfect fix to a given challenge, we expect them to provide some significant benefit to our problem-solving efforts.

This false vision of AI’s capability as a one-size-fits-all tool is deeply problematic, but it’s not hard to see where the misunderstanding started. AI tools have accomplished a great deal across a shockingly wide variety of industries.

In pharma, AI helps researchers home in on new drug candidates; in sustainable agriculture, it can be used to optimize water and waste management; and in marketing, AI chatbots have revolutionized the norms of customer service and made it easier than ever for customers to find straightforward answers to their questions quickly.

Market research provides similar backing for AI’s versatility and value. In 2018, PwC released a report noting that the value derived from the impact of AI on consumer behavior (e.g., through product personalization or greater efficiency) could top $9.1 trillion by 2030.

McKinsey researchers similarly note that 63 percent of executives whose companies have adopted AI say that the change has “provided an uptick in revenue in the business areas where it is used,” with respondents from high performers nearly three times likelier than those from other companies to report revenue gains of more than 10 percent. Forty-four percent say that the use of AI has reduced costs.

Findings like these paint a vision of AI as having an almost universal, plug-and-play ability to improve business outcomes. We’ve become so used to AI being a “fix” that our tendency to be strategic about how we deploy such tools has waned.

Earlier this year, a joint study conducted by the Boston Consulting Group and MIT Sloan Management Review found that only 11 percent of the firms that have deployed artificial intelligence see a “sizable” return on their investments.

This is alarming, given the sheer volume of money that investors are putting into AI. Take the healthcare industry as an example: in 2019, surveyed healthcare executives estimated that their organizations would invest an average of $39.7 million over the following five years. To not receive a substantial return on that money would be disappointing, to say the very least.

As reported by Wired, the MIT/BCG report “is one of the first to explore whether companies are benefiting from AI. Its sobering finding offers a dose of realism amid recent AI hype. The report also offers some clues as to why some companies are profiting from AI and others appear to be pouring money down the drain.”

What, then, is the main culprit? According to researchers, it seems to be a lack of strategic direction during the implementation process.

“The people that are really getting value are stepping back and letting the machine tell them what they can do differently,” Sam Ransbotham, a professor at Boston College who co-authored the report, commented. “The gist is not blindly applying AI.”

The study’s researchers found that the most successful companies used their early experiences with AI tools — good or ill — to improve their business practices and better orient artificial intelligence within their operations. Of those who took this approach, 73 percent said that they saw returns on their investments. Companies that paired their learning mindset with efforts to improve their algorithms also tended to see better returns than those who took a plug-and-play approach.

“The idea that either humans or machines are going to be superior, that’s the same sort of fallacious thinking,” Ransbotham told reporters.

Scientific American writers Griffin McCutcheon, John Malloy, Caitlyn Hall, and Nivedita Mahesh put Ransbotham’s point another way in an article titled — tellingly — “AI Isn’t the Solution to All of Our Problems.” They write:

“The belief that AI is a cure-all tool that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous as it prevents other effective solutions from being implemented earlier or even explored. Instead, we need to both build AI responsibly and understand where it can be reasonably applied.”

In other words: we need to stop viewing AI as a fix-it tool and start treating it as a consultant to collaborate with over months or years. While there’s little doubt that artificial intelligence can help business leaders cultivate profit and improve their business, their deployment of the technology must be done strategically — and with the understanding that the business probably won’t see the gains it hopes for on its first attempt to integrate AI.

If business leaders genuinely intend to make the most of the opportunity that artificial intelligence presents, they should be prepared to workshop their approach. Adopt a flexible, experimental, and strategic mindset. Be ready to adjust your business operations to address any inefficiencies or opportunities the technology may spotlight — and, by that same token, take the initiative to continually hone your algorithms for greater accuracy. AI can provide guidance and inspiration, but it won’t offer outright answers.

Businesses are investing millions — often tens of millions — in AI technology. Why not take the time to learn how to use it properly?

This article was originally published on ChiefExecutive.net

January 23rd, 2021 | Business, Technology

For Investors, Property Tech Goes Far Beyond a Smart Home

At first listen, the term “property tech” seems to fit comfortably within the context of ultra-luxurious modernism. We picture something at home within sleek glass-and-metal walls and minimalist design. We imagine an AI-powered abode where the temperature, light, and IoT-connected outlets can be adjusted with a few smartphone taps or an offhand remark, and a security app allows you to video chat with doorstep visitors from halfway around the world.

These products align with the average consumer’s idea of residential technology. But for those in the commercial real estate sector, “property tech” has an entirely different definition — one far removed from the realm of modernist homeowners and IoT-enthusiasts. In fact, far from being an unnecessary luxury, property tech stands a good chance of revolutionizing commercial real estate at every point, from development to sales to property management.

PropTech: A Promising New Frontier for Commercial Real Estate

As defined by TechTarget, PropTech refers to the “use of information technology (IT) to help individuals and companies research, buy, sell, and manage real estate.” Innovative PropTech solutions are usually designed to facilitate greater efficiency and connectivity in the real estate market, allowing consumers and vendors at all levels to achieve their goals quickly and at high quality. While PropTech capabilities vary widely across products, they tend to fall into three broad categories: smart home, real estate sharing, and fintech.

The first category encompasses the majority of the IoT-powered home devices mentioned at the top of this piece — the smart thermostats, remotely-controlled home systems, and digital security solutions. Real estate sharing refers to online platforms like Airbnb, Redfin, and Zillow, which facilitate the advertisement and sale of real-world properties. The last term is all but self-explanatory; “fintech” references any tool that assists in real estate financial management or transactions.

The potential that PropTech holds to reform the commercial real estate sector is off the charts — and investors know it. According to a recent Re:Tech report, global investment in real estate technology netted an incredible $12.6 billion across 347 deals in 2017 alone, $6.5 billion of which funneled directly to U.S.-based companies. Re:Tech researchers further noted that investment trends indicated a great deal of early interest in untested PropTech solutions, with early-stage companies receiving “the lion’s share” of funding dollars.

Early Successes Illustrate High Potential

This flurry of investor interest isn’t without basis. The PropTech sector has seen runaway growth and concrete success in recent years; aside from the evident popularity of digital-forward platforms like Airbnb and Zillow in the rental and buying markets, adoption of smart home technology has reached a fever pitch. Deloitte reports that sensor deployment in real estate is projected to grow at a rapid compound annual rate, likely topping 1.3 billion sensors in 2020.

Some companies have even incorporated cutting-edge PropTech innovations into their business model to remarkable success. Take the Texas-based real estate investment firm Amherst Holdings as an example. Last year, Forbes profiled the firm’s use of AI and data modeling during the asset identification process, noting how Amherst used AI not only to discover investment properties, but also to make dozens of offers per day on potentially lucrative homes. The strategy has paid off; today, the investment firm is thriving, and its portfolio encompasses an incredible 16,000 homes across the American Sunbelt region.

New York: A New Sandbox for PropTech Creativity?

Now, however, companies may no longer need to foray into PropTech testing without support. Last November, New York announced that it would launch a pilot program allowing PropTech startups to trial their products via NYC’s portfolio of public properties. As stated in a press release, “The New York City Economic Development Corporation will launch a pilot program that allows companies to implement proof-of-concept property technology products in the city’s 326.1 million square feet of owned and managed real estate.”

“We want to make our buildings available to incentivize the kinds of innovations that you are all out there working on day in and day out,” Vicki Been, the deputy mayor for housing and development, commented. “We want our buildings and our tenants to be helpful to you, and provide a way to test some of the ideas that you are developing so that we can get those ideas out to the market and into buildings even faster.”

In this way, the city is offering itself up as an innovation sandbox, a place where real estate innovators can test and troubleshoot their digital tools to the betterment of all — and especially New Yorkers.

With this philosophy of openness and curiosity comes an opportunity for New York-based real estate players to not only test innovative approaches but put them together into a unified strategy. We’ve all seen companies find significant success by leveraging one variety of PropTech solution. Airbnb thrives in facilitating short-term real estate transactions, Google and Amazon have cornered the smart home market, and Amherst Holdings has established a winning, AI-powered strategy for finding and acquiring assets. Individually, all of these tactics show impressive results — but what could we achieve if we managed to link them together?

The Tools of Today Could Create the RE Strategy of Tomorrow

In theory, the disparate PropTech solutions we see now could be stitched into a seamless strategy. The strategy might progress as follows — real estate operators could use AI and data modeling to identify lucrative neighborhoods and home in on investment properties, then apply fintech-powered platforms to purchase those buildings. Next, they might retrofit their assets with utility sensors that can ensure optimal utility use and management. These IoT-equipped devices could also better automate the care of a building by notifying owners when a system requires maintenance and providing real-time insights on how tenants use the property.

When linked, these PropTech solutions can form a continuous feedback loop, allowing property firms an opportunity to gain better insights into how they can best use, maintain, and improve their asset properties.
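As a rough illustration of that feedback loop, here is a minimal Python sketch that turns hypothetical building-sensor readings into maintenance alerts. The sensor model, threshold, and building IDs are all invented assumptions for this sketch, not any vendor’s API.

```python
# A minimal sketch of the "linked" PropTech idea: route readings from
# hypothetical building sensors into maintenance alerts and usage summaries.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    building_id: str
    system: str        # e.g. "hvac", "water", "electric"
    value: float       # normalized utilization, 0.0 to 1.0

# Hypothetical threshold; a real deployment would tune this per asset.
MAINTENANCE_THRESHOLD = 0.9

def summarize(readings: list[SensorReading]) -> None:
    # Group readings by (building, system) so each subsystem is judged on
    # its own recent average load.
    by_system: dict[tuple[str, str], list[float]] = {}
    for r in readings:
        by_system.setdefault((r.building_id, r.system), []).append(r.value)
    for (building, system), values in sorted(by_system.items()):
        avg = mean(values)
        if avg > MAINTENANCE_THRESHOLD:
            print(f"[ALERT] {building}/{system}: avg load {avg:.0%}, schedule maintenance")
        else:
            print(f"[ok]    {building}/{system}: avg load {avg:.0%}")

summarize([
    SensorReading("asset-042", "hvac", 0.95),
    SensorReading("asset-042", "hvac", 0.97),
    SensorReading("asset-042", "water", 0.40),
])
```

The design point is the linkage itself: once acquisition tools, sensors, and maintenance workflows share data, each stage can inform the next rather than operating in isolation.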

The implications for commercial real estate improvement are huge — and, to be clear, this is all available technology. Real estate operators could incorporate PropTech into their strategic workflow today if they wanted. Will that change require some upfront investment and effort? Absolutely — but, as New York’s decision to offer itself as a testing sandbox demonstrates, there is no better time for real estate operators to get ahead of the curve and start crafting unified strategies than right now.

Originally published on 

By |2020-11-20T21:34:55+00:00July 20th, 2020|Business, Current Events, Technology|

Could COVID-19 Kickstart Surveillance Culture?

Several months ago, saying that the “cure” facial recognition offers is worse than the ills it claims to solve would have seemed hyperbolic. But now, the metaphor has become all too literal — and the medicine it promises isn’t quite so easy to reject when sickness is sweeping the globe.

Even as it depresses economies across the world, the coronavirus pandemic has sparked a new period of growth and development for facial recognition technology. Creators pitch their tools as a means to identify sick individuals without risking close-contact investigation.

In China, the biometrics company Telpo has launched non-contact body temperature measurement terminals that — it claims — can identify users even if they wear a face mask. Telpo is near-evangelical about how useful its technology could be during the coronavirus crisis, writing that “this technology can not only reduce the risk of cross infection but also improve traffic efficiency by more than 10 times […] It is suitable for government, customs, airports, railway stations, enterprises, schools, communities, and other crowded public places.”

COVID-19: A Push Towards Dystopia?

At a surface glance, Telpo’s offerings seem…good. Of course we want to limit the spread of infection across public spaces; of course we want to protect our health workers by using contactless diagnostic tools. Wouldn’t we be remiss if we didn’t at least consider the opportunity?

And this is the heart of the problem. The marketing pitch is tempting in these anxious, fearful times. But in practice, using facial recognition to track the coronavirus can be downright terrifying. Take Russia as an example — according to reports from BBC, city officials in Moscow have begun leveraging the city’s massive network of cameras to keep track of residents during the pandemic lockdown.

In desperate times like these, the knee-jerk suspicion that we typically hold towards invasive technology wavers. We think that maybe, just this once, it might be okay to accept facial recognition surveillance — provided, of course, that we can slam the door on it when the world returns to normal. But can we? Once we open Pandora’s box, can we force it shut again?

In March, the New York Times reported that the White House had opened talks with major tech companies, including Facebook and Google, to assess whether using aggregated location data sourced from our mobile phones would facilitate better containment of the virus. Several lawmakers immediately pushed back on the idea; however, the discussion does force us to wonder — would we turn to more desperate measures, like facial surveillance? How much privacy would we sacrifice in exchange for better perceived control over the pandemic?

Understanding America’s Surveillance Culture Risk

I’ve been thinking about this idea ever since January, when an exposé published by the New York Times revealed that a startup called Clearview AI had quietly developed a facial recognition app capable of matching unknown subjects to their online images and profiles — and promptly peddled it to over 600 law enforcement agencies without any public scrutiny or oversight. Clearview stands as a precursor: a budding example of what surveillance culture in America could look like if left unregulated. One quote in particular sticks in my head.

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” David Scalzo, the founder of a private equity firm currently investing in Clearview, commented for the Times. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

Scalzo’s offhand, almost dismissive tone strikes an odd, chilling contrast to the gravity of his statement. If facial recognition technology will lead to a surveillance-state dystopia, shouldn’t we at least try to slow its forward momentum? Shouldn’t we at least consider the dangers that a dystopia might pose — especially during times like these, when privacy-eroding technology feels like a viable weapon against global pandemic?

I’m not the only one to ask these questions. Since January’s exposé, Clearview AI has come under fire from no fewer than four lawsuits. The first castigated the company’s app for being an “insidious encroachment” on civil liberties; the second took aim both at Clearview’s tool and the IT products provider CDW for its licensing of the app for law enforcement use, alleging that “The [Chicago Police Department] […] gave approximately 30 [Crime Prevention and Information Center] officials full access to Clearview’s technology, effectively unleashing this vast, Orwellian surveillance tool on the citizens of Illinois.” The company was also recently sued in Virginia and Vermont.

All that said, it is worth noting that dozens of police departments across the country already use products with facial recognition capabilities. One report on the United States’ facial recognition market found that the industry is expected to grow from $3.2 billion in 2019 to $7.0 billion by 2024. The Washington Post further reports that the FBI alone has conducted over 390,000 facial-recognition searches across federal and local databases since 2011.

Unlike DNA evidence, facial recognition technology is usually relatively cheap and quick to use, lending itself easily to everyday use. It stands to reason that if better technology is made available, usage by public agencies will become even more commonplace. We need to keep this slippery slope in mind. During a pandemic, we might welcome tools that allow us to track and slow the spread of disease and overlook the dangerous precedent they set in the long-term.

Given all of this, it seems that we should, at the very least, avoid panic-prompted decisions to allow facial recognition — and instead, consider what we can do to avoid the slippery slope that facial recognition technology poses.

Are Bans Protection? Considering San Francisco

In the spring of 2019, San Francisco passed legislation that outright forbade government agencies from using tools capable of facial surveillance — although the ruling was amended to allow for facial recognition-equipped devices if there was no viable alternative. The lawmakers behind the new ordinance stated their reasoning clearly, writing that “the propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits.”

They have a point. Facial recognition software is notorious for its inaccuracy. One new federal study also found that people of color, women, older subjects, and children faced higher misidentification rates than white men.

“One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests, or worse,” Jay Stanley, a senior policy analyst at the American Civil Liberties Union (ACLU), told the Washington Post. “But the technology’s flaws are only one concern. Face recognition technology — accurate or not — can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale.”

While it’s still too early to have a clear gauge on the ban’s efficacy, it is worth noting that the new legislation sparked a few significant and immediate changes to the city’s police department. In December, Wired reported that “When the surveillance law and facial recognition ban were proposed in late January, San Francisco police officials told Ars Technica that the department stopped testing facial recognition in 2017. The department didn’t publicly mention that it had contracted with DataWorks that same year to maintain a mug shot database and facial recognition software as well as a facial recognition server through summer 2020.”

The department scrambled to dismantle the software after the ban, but its secretive approach remains problematic. The very fact that the San Francisco police department was able to acquire and apply facial recognition technology without public oversight is troubling. The city’s current restrictions offer a stumbling block by limiting acceptance of surveillance culture as a normal part of everyday life — and prevent us from automatically reaching for it as a solution during times of panic.

A stumbling block, however, is not an outright barricade. Currently, San Francisco is under a shelter-in-place mandate; as of April 6, it had a reported 583 confirmed cases and nine deaths. If the situation worsens, could organizers suggest that the city make an exception and use facial recognition tracking to flatten the curve, just this once? It’s a long-shot hypothetical, but it does lead us to wonder what could happen if we allow circumstances to convince us into surveillance culture, one small step at a time.

Bans can only do so much. While the San Francisco ruling proves that Scalzo’s claim that “Laws have to determine what’s legal, but you can’t ban technology” isn’t strictly speaking correct, the sentiment behind it remains. Circumstances can compel us into considering privacy-eroding tech even as those explorations lead us down a path to dystopia.

So, in a way, Scalzo is right; the proliferation of facial recognition technology is inevitable. But that doesn’t mean that we should give up on bans and protective measures. Instead, we should pursue them further and slow the momentum as much as we can — if only to give ourselves time to establish regulations, rules, and protections. We can’t give in to short-term thinking; we can’t start down the slippery slope towards surveillance culture without considering the potential consequences. Otherwise, we may well find that the “cure” that facial recognition promises is, in the long term, far worse than any short-term panic.

Originally published on Hackernoon.com

June 12th, 2020 | Business, Current Events, Technology

In the Digital Age, Companies Need a Human Touch

Today, a customer’s entire experience with a company — from first inquiry to final transaction — can and often does occur entirely online. Many consumers seem to prefer it that way, too. According to data collected by Nextiva, 57% of surveyed respondents said that they would rather contact companies via email or social media than by voice-based customer support.

The shift to a tech-savvy business foundation isn’t just convenient for consumers — it comes with considerable benefits for businesses too. In one report published by Juniper Research, analysts projected that automated systems would save a collective $8 billion in customer support costs by 2022. That’s a compelling financial argument for businesses. Smart Insights estimates that 34% of companies have already undergone a digital transformation, while researchers for Seagate anticipate that over 60% of CEOs globally will begin prioritizing digital strategies to improve customer experience by the end of this year.

Integrating technology into our day-to-day business operations is a no-brainer, given its potential to lessen costs, boost convenience, and improve consumer experiences. However, in our race to meet the digital age, we may be leaving one critical aspect of customer service behind: human connection.

As much as consumers appreciate the convenience and speed that digital tools and systems facilitate, they also need to feel a genuine human connection and know that there are people behind the AI-powered customer hotline. As one writer puts the matter in an article for Business Insider, “A satisfactory customer experience depends on how well a company can relate to a customer on an emotional level. To create memorable experiences, employees who are curious and have a genuine desire to assist the customers can set brands apart.”

Business consultant Chris Malone and social psychologist Susan T. Fiske researched this emotional connection for their book, The Human Brand: How We Relate to People, Products, and Companies. In the text, they write that consumers gravitate towards companies similarly to how they flock to friends; if they perceive a business as being emotionally warm and welcoming, but not particularly competent, they will still enjoy the experience and like the brand. In contrast, if a company is competent but cold in its customer engagement, consumers tend to visit only when circumstances demand it.

The ideal, they say, is for companies to be both warm and competent. Within the context of our digitally-driven world, striking that balance means integrating consumer service technology without wholly excising human personality.

Businesses need to identify when their AI-powered chatbot or customer service channel crosses over the line from useful to canned or frustrating. Sometimes, a robot voice just isn’t helpful enough; according to statistics published by American Express, 40% of customers prefer talking to a human service representative when they struggle with complex problems. Consumers should have the means to reach out to human representatives if they can’t solve their problems through automated channels.
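To show what that handoff might look like in practice, here is a minimal sketch of an escalation rule in Python. The topic list, attempt limit, and function names are invented for illustration and are not drawn from any real support platform.

```python
# A minimal sketch of an escalation rule for an automated support channel:
# hand off to a human once the bot fails repeatedly or the issue is complex.
# All names and thresholds here are hypothetical assumptions.
COMPLEX_TOPICS = {"billing dispute", "account security", "cancellation"}
MAX_BOT_ATTEMPTS = 2

def route(topic: str, failed_bot_attempts: int) -> str:
    """Return which channel should handle the next customer message."""
    if topic in COMPLEX_TOPICS or failed_bot_attempts >= MAX_BOT_ATTEMPTS:
        return "human_representative"
    return "chatbot"

print(route("password reset", failed_bot_attempts=0))   # -> chatbot
print(route("billing dispute", failed_bot_attempts=0))  # -> human_representative
print(route("password reset", failed_bot_attempts=2))   # -> human_representative
```

The rule itself is trivial; the point is that the escape hatch to a human is designed in from the start rather than bolted on after customers complain.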

People want to connect with a brand that has personality, voice, and empathy — even across digital channels. Social media platforms have evolved into significant touchpoints for businesses today; researchers for Nextiva estimate that over 35% of consumers in the U.S. reached out to a business through social media in 2017. Their expectations, too, are significant — 48% of the customers who contact a company via social media expect a response to their questions or complaints within 24 hours. Even so, a cold, automated response may be just as ineffective as no response at all.

Emerging technology is not a be-all, end-all, unquestionable solution to our problems. Businesses need to treat digital channels with all the personality, empathy, and care that they would offer during a client call. If they rely on canned responses or AI bots, they may find their consumer pool shrinking as customers defect to companies with more perceived warmth.

Technology is convenient, yes — however, the convenience it creates should never come at the cost of human connection.

Originally published on ScoreNYC

November 26th, 2019 | Uncategorized

Soon, a “Smart Home” Will Just Be a Home

As the line dividing the internet and the physical world blurs in ever-increasing ways, it shouldn’t be a surprise that online amenities have arrived in the modern home. The smart home market is predicted to grow massively over the next few years, and it’s not hard to see why. Smart homes offer convenience and a modern sheen to home living, but more importantly a high-tech layer of security that empowers homeowners to better keep their dwellings and family members safe.

The pitch is a compelling one to homeowners, as well as to investors. According to statistics provided by Statista, analysts anticipate that revenue in the smart home market will grow 15.43% year-over-year. Household penetration currently stands at 27.5% and is further projected to hit 47.4% by 2023. Smart homes are undoubtedly popular; for investors, the growing market could prove lucrative.
Here’s why homeowners are flocking towards smart home technology — and why tech-savvy real estate investors should take notice of the increasing consumer interest.

Staying guarded through tech

The most vulnerable point for most homes is the most common point of entry: the front door. Experts estimate that over a third of burglaries result from unlocked or unsecured front doors, meaning a safely locked entryway can be among the best deterrents against intruders. Smart locks that are activated and deactivated remotely via your home wifi leave homeowners secure in the knowledge that their homes are protected while they’re away. Security-enabled apps like Nest can monitor the status of all entryways, meaning front or side doors can be unlocked for trusted guests or service workers while you’re at work or on vacation. Alerts to your phone can let you know if doors have been breached, meaning you’ll know there’s been a break-in the instant your home security company does. While this won’t replace actually being there to survey the trouble, it provides some peace of mind to know your home tech is keeping you apprised of all that’s happening while you’re out of reach.

Danger alerts at the speed of WiFi

Crime isn’t the only major danger that smart tech can help homeowners face. The danger of house fires hasn’t been eliminated with technology, but cutting-edge smoke detectors offer a level of security that can only be found when including the most modern safety features. Photoelectric sensors can identify fires by type, catching even smoldering fires with little flame sooner than traditional detectors can. Linked to a smart home sound system, a smoke detector can even use voice notifications to alert you, over home speakers, where the fire is centered and how best to get out. In a situation where split-second decisions can prove life-changing, smart tech is a powerful safeguard for homeowners and their families.
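As a sketch of how such a detector-to-speaker handoff could work, consider the following Python fragment. The event shape, room-to-exit table, and print-based “speaker” are stand-ins for whatever interfaces a real smart home hub would expose; everything named here is a hypothetical assumption.

```python
# A minimal sketch of routing a smoke-detector event to a spoken alert.
# The event fields and exit map are invented for illustration only.
from dataclasses import dataclass

@dataclass
class SmokeEvent:
    room: str
    fire_type: str  # photoelectric sensors can flag smoldering vs. flaming fires

# Hypothetical mapping from the fire's location to the safest exit route.
EXITS = {"kitchen": "back door", "bedroom": "front door", "living room": "front door"}

def announce(event: SmokeEvent) -> None:
    exit_route = EXITS.get(event.room, "nearest exit")
    message = (f"Warning: {event.fire_type} fire detected in the {event.room}. "
               f"Leave through the {exit_route}.")
    # In a real system this message would be pushed to networked speakers;
    # here we simply print it.
    print(message)

announce(SmokeEvent(room="kitchen", fire_type="smoldering"))
```

The value of the integration is in the routing logic: the detector knows where the fire is, and the sound system can turn that location into an evacuation instruction.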

Words of warning

Of course, when it comes to security, smart home tech presents one brand-new vulnerability that homeowners of the past never had to worry about. It may sound odd to consider, but the threat of home hacking is a real danger in a world where locks, smoke alarms, and other fixtures are all internet-enabled. The cat burglar of today may scope out his victims with a laptop or smartphone in hand, ready to attack with malicious software designed to disable home security or just harass and annoy homeowners by disabling appliances and lights.

Fortunately, safeguards against smart home hacking are similar to the ones we already take while online. Expert studies of security flaws point to fixes that ought to be familiar to anyone used to performing a basic cybersecurity routine. Two-factor authentication, strong passwords, and keeping up with regular security updates can keep smart home tech safe from malicious forces both online and in person. While most of us are probably new to downloading security updates to our door locks, the benefits of smarter control over home safety easily outweigh such a relatively minor inconvenience.

Convenience and novelty aren’t the only reasons smart homes have become attractive to buyers in the past decade. The above security features empower homeowners to take greater control over the sanctity of their property, even when they’re thousands of miles away. For keeping your possessions, your home, and your family safe, smart home technology presents the next step in control over what happens on our property. While this new opportunity does admittedly create its own challenges, the benefits should entice anyone looking to fortify their castle, no matter its size. In the future, we can certainly expect homeowner buy-in — and investor interest — to grow.

Originally published on Medium

November 6th, 2019 | Technology

Cable is Dead, Long Live (Streaming) Cable

It’s no secret that cable is on its way out. Ever since Netflix sparked an explosion of public interest in streaming entertainment with its 2013 hit series House of Cards, traditional channels of access — cable, satellite, dish — have been rendered all but obsolete.

According to reports published by Leichtman Research Group, a firm that centers its research and analysis in the media and entertainment sectors, the six most popular cable companies lost a whopping 910,000 video subscribers in 2018. Satellite TV and DirecTV services fared even worse — analysts estimated that the former lost around 2,360,000 subscribers and the latter 1,236,000 that same year. The sharp decline isn’t new, either; LRG researchers believe that the user base for traditional services has sunk by nearly ten million since the first quarter of 2012.

Streaming is slowly outmoding cable — except, of course, in cases where cable has managed to latch onto streaming itself. Interestingly, cable’s primary source of subscription growth has been via virtual MVPDs (vMVPDs), or services that offer a bundle of television channels through the internet without providing traditional data transport infrastructure. LRG analysts estimate that roughly four million subscribers have signed on for vMVPD services such as PlayStation Vue, YouTube TV, and Hulu Live. But these services seem more like a speedbump on cable’s decline than an actual stop, a gateway service to help longtime cable enthusiasts transition into a streaming norm.

Streaming entertainment is the new normal, and any millennial could build a compelling case for why the change is a good one. After all, why would you pay for expensive cable bundles and struggle with limited viewing schedules when you can watch your favorite shows and movies on Netflix or Hulu for less than $15 per month? Streaming offers original content at a reasonable price point and — unlike cable — is accessible from wherever an internet connection is available. It’s so popular that new streaming services have begun popping up like weeds. Apple TV+ goes online on November 1st, Disney+ opens for registration in November, and NBC’s Peacock is set to go live sometime in 2020.

Cable is dying. But will streaming, the reason behind cable’s slow extinction, one day face the same decline? 

Cable is Dead, Long Live (Streaming) Cable

As it turns out, the streaming coup we see today may be just another remix of the same old industry song. 

Consider the now-giant HBO’s humble roots as an example. The service was arguably the first network to offer premium cable and ask viewers to pay a subscription fee — and it launched its experiment in the town of Wilkes-Barre, Pennsylvania, shortly after Hurricane Agnes hit the area in 1972. The initiative had a rocky start, reportedly losing nearly $9,000 per month as it struggled to lay cable and pay for a microwave link to transmit entertainment offerings from New York City. But the project ultimately paid off in spades, heralding a new era for paid cable television.

Cable television was new, convenient, and engaging. Its subscribers could view new and exciting content that wasn’t limited by the profanity and nudity guidelines imposed on basic cable programs. Eventually, cable providers began offering bundles to aggregate channels and make accessing paid content easy, convenient, and affordable.  

Sound familiar yet? 

Today, streaming entertainment services offer the same convenience, aggregation, and affordability that characterized cable — but better. Importantly, they also provide channel subscriptions a la carte, a move cable companies tended to avoid out of concern that it would negatively impact subscription numbers.

When giants such as Netflix, Hulu, and Amazon Prime claimed dominance over the market, streaming seemed like the answer to all of cable subscribers’ problems. However, as more niche entertainment stream providers enter the field, we appear to be falling back into cable’s old woes. 

Today, viewers have over 300 streaming video services to choose from, each with its own subscription price. Many host original content, knowing that high-quality and exclusive offerings attract subscribers. According to one recent study from Deloitte, 57% of paid streaming users — and 71% of millennial users — report subscribing to access original content. However, users’ willingness to pay for content has its limits. As Deloitte’s researchers put the matter: “nearly one-half (47 percent) are frustrated by the growing number of subscriptions and services they need to piece together to watch what they want. Forty-eight percent say it’s harder to find the content they want to watch when it is spread across multiple services.”

Consumers don’t want to make a patchwork out of their streaming services to get the content they want. The fragmentation and consumer difficulty we face now is likely to intensify, given the sheer number of high-profile streaming platforms set to launch soon. As a result, talk of using bundling as a solution to subscriber frustrations has returned; according to IndieWire, WarnerMedia is reportedly aiming to launch a streaming platform that would bundle HBO, Cinemax, and some Warner Bros. content into one service. It would have a higher price point, too — $16-$17 per month. It seems only fair to expect prices to creep up further as other, competing bundles undergo discussion.  

Digital streaming is, without question, more convenient and better-suited to audience needs for affordable original content than paid cable. Streaming’s coup is a well-deserved one. However, it seems naive to think that the problems consumers complained of with cable — higher prices, annoying bundles — won’t appear as time goes on. 

Cable is dead. Long live (streaming) cable.

October 15th, 2019 | Culture, Technology

AI Fails and What They Teach Us About Emerging Technology

These days, we’ve become all but desensitized to the miraculous convenience of AI. We’re not surprised when we open Netflix to find feeds immediately and perfectly tailored to our tastes, and we’re not taken aback when Facebook’s facial recognition tech picks our face out of a group-picture lineup. Ten years ago, we might have made a polite excuse and beat a quick retreat if we heard a friend asking an invisible person to dim the lights or report the weather. Now, we barely blink — and perhaps wonder if we should get an Echo Dot, too. 

We have become so accustomed to AI quietly incorporating itself into almost every aspect of our day-to-day lives that we’ve stopped putting hard walls around our perception of what’s possible. Rather than address new claims of AI’s capabilities with disbelief, we regard them with interested surprise and think — could I use that?

But what happens when AI doesn’t work as well as we expect? What happens when our near-boundless faith in AI’s usefulness is misplaced, and the high-tech tools we’ve begun to rely on start cracking under the weight of the responsibilities we delegate? 

Let’s consider an example.

AI Can’t Cure Cancer — Or Can It? An IBM Case Study 

When IBM’s Watson debuted in 2014, it charmed investors, consumers, and tech aficionados alike. Proponents boasted that Watson’s information-gathering capabilities would make it an invaluable resource for doctors who might otherwise not have the time or opportunity to keep up with the constant influx of medical knowledge. During a demo that same year, Watson dazzled industry professionals and investors by analyzing an eclectic collection of symptoms and offering a series of potential diagnoses, each ranked by the system’s confidence and linked to relevant medical literature. The AI’s clear knowledge of rare diseases and its ability to provide diagnostic conclusions were both impressive and inspiring.

Watson’s positive impression spurred investment. Encouraged by the AI’s potential, MD Anderson, a cancer center within the University of Texas, signed a multi-million dollar contract with IBM to apply Watson’s cognitive computing capabilities to its fight against cancer. Watson for Oncology was meant to parse enormous quantities of case data and provide novel insights that would help doctors provide better and more effective care to cancer patients. 

Unfortunately, the tool didn’t exactly deliver on its marketing pitch. 

In 2017, auditors at the University of Texas submitted a caustic report claiming that Watson not only cost MD Anderson over $62 million but also failed to achieve its goals. Doctors lambasted the tool for its propensity to give bad advice; in one memorable case reported by the Verge, the AI suggested that a patient with severe bleeding receive a drug that would worsen their condition. Luckily the patient was hypothetical, and no real people were hurt; however, users were still understandably annoyed by Watson’s apparent ineptitude. As one particularly scathing doctor said in a report for IBM, “This product is a piece of s—. We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.”

But is the project’s failure to deliver on its hype all Watson’s fault? Not exactly. 

Watson’s main flaw was with implementation, not technology. When the project began, doctors entered real patient data as intended. However, Watson’s guidelines changed often enough that updating those cases became a chore; soon, users switched to hypothetical examples. This meant that Watson could only make suggestions based on the treatment preferences and information provided by a few doctors, rather than the actual data from an entire cancer center, thereby skewing the advice it provided. 

Moreover, the AI’s ability to discern connections is only useful up to a point. It can note a pattern between a patient with a given illness, their condition, and the medications prescribed, but any conclusions drawn from such analysis would be tenuous at best. The AI cannot definitively determine whether a link reflects correlation, causation, or mere coincidence — and thus risks providing diagnostic conclusions without evidence-based backing.
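A toy example makes the correlation trap easy to see. The snippet below is an illustration of the statistical point, not of Watson’s actual methods: it builds two series that merely share a time trend, yet measures a correlation near 1.0 between them.

```python
# A toy illustration of why pattern-finding alone can't separate correlation
# from causation: two otherwise unrelated series that both drift upward over
# time will correlate strongly. The variable names are purely illustrative.
import random
random.seed(0)

n = 50
trend = [i / n for i in range(n)]
# Both series follow the same upward drift plus independent noise:
drug_prescriptions = [t + random.gauss(0, 0.05) for t in trend]
recovery_rate = [t + random.gauss(0, 0.05) for t in trend]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Prints a correlation near 1.0 even though neither series causes the other.
print(f"correlation = {pearson(drug_prescriptions, recovery_rate):.2f}")
```

A pattern-matcher that treated that number as evidence of causation would be making exactly the kind of unsupported leap described above.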

Given the lack of user support and the shortage of real information, is it any surprise that Watson failed to deliver innovative answers? 

What Does Watson’s Failure Teach Us?

Watson’s problem is more human than it is technical. There are three major lessons that we can pull from the AI’s crash: 

We Need to Check Our Expectations.

We tend to believe that AI and emerging technology can achieve what its developers say it can. However, as Watson’s inability to separate correlation and causation demonstrates, the potential we read in marketing copy can be overinflated. As users, we need a better understanding of, and a healthy skepticism toward, emerging technology before we begin relying on it.

Tools Must Be Well-Integrated. 

If doctors had been able to use the Watson interface without continually needing to revise their submissions for new guidelines, they might have provided more real patient information and used the tool more often than they did. This, in turn, may have allowed Watson to be more effective in the role it was assigned. Considering the needs of the human user is just as important as considering the technical requirements of the tool (if not more so). 

We Must Be Careful.

If the scientists at MD Anderson hadn’t been so careful, or if they had followed Watson blindly, real patients could have been at risk. We can never allow our faith in an emerging tool to be so inflated that we lose sight of the people it’s meant to help. 

Emerging technology is exciting, yes — but we also need to take the time to address the moral and practical implications of how we bring that seemingly capable technology into our lives. At the very least, it would seem wise to be a little more skeptical in our faith.

September 3rd, 2019 | Uncategorized

How AR and VR Could Change Tourism in New York

Tourist itineraries in New York City are predictable enough to be a b-roll cliché. Tourists are easy enough to spot: they move in flocks through Central Park, take selfies at the Statue of Liberty, stare in awe from their slow-moving tour buses at the Empire State Building, and — of course — purchase “I Heart NYC” t-shirts from overpriced carts. The New York that visitors enjoy is predictable, yes, but also vivid, exciting, and well-packed with familiar landmarks; each new day offers wide-eyed tourists the chance to experience famous sights firsthand.

But what if the tourism experience could span more than a well-walked map of landmarks? What if visitors could peel back the cliches of New York’s touristy exterior and delve into its rich history? Augmented and virtual reality technologies may provide a means to do just that, revolutionizing the way visitors experience both the city and its history.

VR and AR’s entry into the tourism sector isn’t all that surprising, given the technologies’ growing popularity. Analysts for Goldman Sachs estimate that the market for both will top $1.6 billion by 2025. Figures from Statista further indicate that as of 2018, 117 million people worldwide were active VR users — a notable leap from four years before, when only 200,000 actively used the technology. Both AR and VR are well-known for their ability to create immersive digital experiences; they empower consumers to delve into their favorite fantasy gaming worlds, experience movies in near-overwhelming sensory detail, and even virtually “trial” products before buying them in a brick-and-mortar store. With tourism, virtual- and augmented reality technologies promise to add another layer of immersion to an industry that already centers on creating memorable experiences.

VR Expands Tourism Possibilities

Every pre-planned walk or guided bus tour has its limits. Tourists can’t duck under the metaphorical velvet rope to explore their favorite attractions; they have to stay within set, guide-approved bounds. With VR, those limitations are less constricting, offering virtual access to the tourist without compromising the security of the site itself.

As Dr. Nigel Jones, a senior lecturer in information systems at Cardiff Metropolitan University, noted in a recent article for the BBC, VR provides “something that’s more tangible to the [tourist]. They can see where they’re going to go, see what’s happening in that location […] The other advantage is to give people an experience that they can’t do. You could take them to a place that’s off limits — like a dungeon in a castle.”

New York City might be running low on castles, but it certainly has no shortage of historic attractions and digitally-explorable landscapes. Consider Governor’s Island, a popular tourist hotspot that sits just east of the Statue of Liberty. Today, the island encompasses several historical sites and a national park — but centuries ago, it was a seasonal fishing spot for Native Americans and an outpost for English and Dutch settlers. The island’s history is rich — and relatively inaccessible for most tourists. However, recent AR innovations have begun to allow tourists to walk through history as they traverse the island.

Inventing America is one such tourist-centered tool. Made publicly available in 2018, Inventing America uses an AR-powered app to transport visitors into a 17th-century, colonial-era version of Governor’s Island. The app provides users with the opportunity to delve into storylines, characters, and history even as they explore the real-life Governor’s Island on foot. Experiences in the app are inextricably tied to physical exploration, ensuring that the AR game complements and supports, rather than replaces, a tourist’s real-world experience on the island.

Of course, not all VR- and AR innovations are quite so based in game and narrative. Others, like the New York City-based tour provider The RIDE, use VR and AR experiences to provide tourists with more information as they drive past popular city hotspots. The RIDE melds traditional tour bus routes with augmented reality technology; each of its buses sports 40 LCD TVs, surround sound, and LED lights. This structure, the company notes, allows facilitators to provide “deeply researched audio/visual support conveying the history and growth of Manhattan” during their tours, thereby superimposing a tech-powered view of a past New York onto the view tourists see beyond the bus’s windows.

Emerging virtual tools promise to add all-new layers to New York’s tourism experience, sweeping away the tired tropes of tourist cliches — and we will be all the better for it.

May 30th, 2019 | Culture, Technology