
How Huawei Plans to Win Western Markets


Some Chinese high-tech companies may be bigger than you think. The e-commerce giant Alibaba has a market capitalization of over $400 billion. The social media and gaming company Tencent is not far behind, and nearly a billion people use its WeChat messaging service. Baidu is the world’s second largest search engine, and is increasingly strong in the key sector of artificial intelligence. Despite their size, these companies are largely invisible in the West because their massive successes are almost entirely restricted to China.

That’s partly because they offer software and services, neither of which travel particularly well thanks to the cultural baggage they bring with them. Chief among those is that the Chinese government has access to all of a company’s user data, and can impose any restrictions that it wishes on the use of software and services, as this blog reported earlier this year. More recently, Alibaba was instructed to remove unauthorized VPNs from its Taobao e-commerce platform. These are not aspects that are likely to endear Chinese software and services companies to Western users worried about privacy and censorship.

But there is another Chinese IT giant – Huawei – still relatively unfamiliar in the West, that is having far more luck in selling its products into markets outside China. It has achieved that because it is a company that produces hardware based on international standards, and largely running open source software. As well as the general benefits of adopting open standards and open source, this approach may also be an attempt to allay earlier fears that Huawei hardware might contain backdoors available to the Chinese government.

In the West, Huawei is probably best known for its mobile phones. Recent market research suggests that it has overtaken Apple as the world’s second-biggest smartphone manufacturer by sales after Samsung, with particular success in Europe. However, for several decades after its founding in 1987, its main product line was telecoms equipment. A measure of its success is that in 2012, it overtook Ericsson as the world’s largest telecommunications equipment manufacturer.

Huawei today employs 180,000 people, many of whom hold shares in the company, which is still privately held despite its size. Last year, its revenue was around $75 billion, with a profit of $7 billion. In 2016, approximately 80,000 employees were engaged in R&D, comprising 45% of its total workforce. Huawei’s R&D expenditures that year were around $10 billion.

The fruits of that investment were revealed at Huawei Connect 2017, its massive annual conference that this year saw 20,000 participants from over 150 countries, and which I attended last week (disclosure: Huawei paid for my travel costs). As the conference motto “Grow with the cloud” underlined, Huawei is placing public and private clouds at the heart of its strategy.

According to one of Huawei’s “rotating CEOs”, Huawei aims to be a key player in one of the five global cloud systems it predicts will coalesce, rather as airline alliances have created three main global carrier groups. Huawei placed great emphasis on what it called the “intelligent cloud”, which runs artificial intelligence software on the cloud platform. Specifically, at its conference the company launched what it called “the industry’s first all-cloud, network-wide smart video cloud solution.” This, it said, “provides a strong computing engine that supports public safety video application services and accelerates video application innovation to help public safety organizations better serve and protect citizens.”

Such “smart video” capabilities form an important component of a larger concept, the “smart city”, which is now one of the hottest marketing buzzwords in the high-tech world, along with its variant, the “safe city”. A brochure available during the Huawei Connect conference entitled “The Road to Collaborative Public Safety” defines three aims of the safe city: being able to detect threats as they emerge; being able to collect, share and analyze city data; and allowing the authorities to identify threats and then act in real time. Huawei’s brochure says that there are already more than 100 safe city implementations using its products in 30 countries, covering 400 million people.

A key element of Huawei’s safe city system is “intelligent video surveillance.” This offers scene search in order to track particular elements in the video feeds, and video synopsis, which can summarize hours of surveillance videos into key clips for human analysis. Other features include “entity recognition”, behavior analysis and crowd counting. Extra features that can be added go beyond video surveillance to include data from Internet of Things devices to detect chemical, biological, radiological and nuclear material, radar and electro-optics, and monitoring of social media feeds. According to Huawei’s text:

“Public safety is more than current safe city. It is about preventing and solving crimes, reducing loss of life and property. Public safety is also about minimizing disruption to life. Public safety is beyond detection and response; it includes prevention and bringing life to normalcy. It encompasses digital security, health security, infrastructure safe and personal safety.”

As that hints, this includes predictive policing, or “PredPol” as the brochure terms it, which “involves analysis of data to predict the next crime, with the objective of preventing it.”

The ideas and technology behind the “safe city” sound troubling, not least from a privacy viewpoint. But in truth, much of this is already happening in the West. For example, CCTV cameras routinely keep tabs on our every movement, especially in countries like the UK, which has millions of such systems in place. As this blog has reported, facial recognition systems are also being used in the UK and elsewhere. The only difference between this and what Huawei offers with its safe city systems is that the latter is completely integrated and probably works rather better. Indeed, it’s easy to see Western governments that already carry out mass surveillance of their citizens acquiring Huawei’s products in order to upgrade their snooping capabilities.

The problem is not so much with Huawei’s application of powerful cloud and AI technologies to surveillance, but the bargain it implies – the bargain that we have all, to varying degrees, accepted. The deal is that if we allow the government to watch our every move, it will keep us safe from all those lurking dangers in the modern, uncertain world. Politicians everywhere shamelessly play on our fears to justify intrusive surveillance laws. So it should come as no surprise that many people are happy with the roll-out of CCTVs or suggestions that end-to-end encryption should be banned – after all, if you are a law-abiding citizen, you have nothing to hide, right?

In China, government surveillance is baked into every online service, not just in safe cities. But again, the situation outside China is not that different: everything we do on Google or Facebook is tracked and analyzed for the purpose of selling advertising. As we now know from Snowden’s leaks, under the PRISM program, the US government taps into that commercial surveillance data to gather intelligence. So the only difference between China and the West is that the former does not attempt to hide the fact that it spies on its citizens, while the latter tries to deny it. Similarly, Huawei has no problem openly offering its new AI-enhanced cloud-based surveillance systems, while its Western rivals are doubtless doing the same, but keeping quiet about it. The real issue is our meek acquiescence in the continual roll-out of privacy-harming technology by both governments and companies everywhere.

___________________________________________________________________

About the Author

This article was written by Glyn Moody of Privacy News Online.


The Most Important Tech Job that Doesn’t Exist


Yesterday I asked a prominent VC a question:

“Why is it that, despite the fact that so many successful startup ideas come from academic research, on the investment side there doesn’t seem to be anyone vetting companies on the basis of whether or not what they’re doing is consistent with the relevant research and best practices from academia?”

His response was that, unlike with startups in other sectors (e.g. biotech, cleantech, etc.), most tech startups don’t come out of academia, but rather are created to fill an unmet need in the marketplace. And that neither he nor many of his colleagues spent much time talking with academics for this reason.

This seems to be the standard thinking across the industry right now. But despite having nothing but respect for this investor, I think the party line here is unequivocally wrong.

Let’s start with the notion that most tech startups don’t come out of academia. While this may be true if you consider only the one-sentence pitch, once you look at the actual design and implementation choices these startups are making there is typically quite a lot to work with.

For example, there is a startup I recently looked at that works to match mentors with mentees. Though one might not be aware of it, there is actually a wealth of research into best practices:

  • What factors should be used when matching mentors with mentees?
  • How should the relationship between the mentor and mentee be structured?
  • What kind of training, if any, should be given to the participants?

That’s not to say that a startup that’s doing something outside the research, or even contraindicated by the research, is in any way suspect. But it does raise some questions: Does the startup have a good reason for what they’re doing? Are they aware of the relevant research? Is there something they know that we don’t?

If the entrepreneurs have good answers to these questions then it’s all the more reason to take them seriously. But if they don’t then this should raise a few red flags. And it’s not only niche startups in wonky areas where this is an issue.

For example, I rarely post to Facebook anymore, but people who follow me can still get a good idea of what I’m up to. Why? Because Facebook leverages the idea of behavioral residue to figure out what I’m doing (and let my friends know) without me having to explicitly post updates. It does this by using both interior behavioral residue, e.g. what I’m reading and clicking on within the site, and exterior behavioral residue, e.g. photos of me taken outside of Facebook.

To understand why leveraging behavioral residue is so important for social networks, consider that, of the people who visit a typical website, only about 10% will create an account. Of those, about 10% will make at least one content contribution, and of those, about 10% will become core contributors. So for a typical user with a couple hundred friends, this translates into seeing content from only a tiny handful of other people on a regular basis.
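The funnel described above can be made concrete with some quick arithmetic. The 10% figures are the article’s rough illustrative rates, not measured data, and the visitor count here is purely hypothetical:

```python
# Illustrative participation funnel: at each stage, roughly 10% of the
# previous group carries on. All numbers are hypothetical.
visitors = 10_000                           # people who visit the site
account_holders = int(visitors * 0.10)      # ~10% create an account
contributors = int(account_holders * 0.10)  # ~10% of those contribute once
core = int(contributors * 0.10)             # ~10% of those post regularly

print(account_holders, contributors, core)  # 1000 100 10

# For a user with ~200 friends, the expected number of core
# contributors among those friends is tiny:
friends = 200
expected_core_friends = friends * core / visitors
print(expected_core_friends)  # 0.2
```

That final number is the point: without something like behavioral residue generating content passively, a typical user would see regular activity from well under one friend on average.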

In contrast with Facebook, one of the reasons why Foursquare has yet to succeed is the significant problems with its initial design decisions:

  • The only content on the site comes from users who manually check into locations and post updates. This means that of my 150 or so friends, I’m only seeing what one or two of them are actually doing, so what’s the value?
  • The heavy use of extrinsic motivators (e.g. badges), despite research showing time and again that extrinsic motivation undermines intrinsic motivation.

The latter especially is a good example of why investing on traction alone is problematic: many startups that leverage extrinsic rewards are able to get a good amount of initial traction, but almost none of them are able to retain users or cross the chasm into the mainstream. Why isn’t it anyone’s job to know this, even though the research is readily available for anyone who wants to read it? And why is it so hard to go to any major startup event without seeing VCs showering money on these sorts of startups that are so contraindicated by the research that they have almost no realistic chance of succeeding?

This same critique of investors applies equally to the startups themselves. You probably wouldn’t hire an attorney who wasn’t willing to familiarize himself with the relevant case law before going to court. So why is it that the vast majority of people hired as community managers and growth marketers have never read Robert Kraut? And the vast majority of people hired to create mobile apps have never heard of Mizuko Ito?

A lot of people associate the word design with fonts, colors, and graphics, but what the word actually means is fate — in the most existential sense of the word. That is, good design literally makes it inevitable that the user will take certain actions and have certain subjective experiences. While good UX and graphic design are essential, they’re only valuable to the extent that the person doing them knows how to create an authentic connection with the users and elicit specific emotional and social outcomes. So why are we hiring designers mainly on their Photoshop skills and maybe knowing a few tricks for optimizing conversions on landing pages? What a waste.

Of all the social sciences, the following seem to be disproportionately valuable in terms of creating and evaluating startups:

  • Psychology / Social Psychology
  • Internet Psychology / Computer Mediated Communication
  • Cognitive Development / Early Childhood Education
  • Organizational Behavior
  • Sociology
  • Education Research
  • Behavioral Economics

And yet not only is no one hiring for this, but having expertise in these areas likely won’t even get you so much as a nominal bonus. I realize that traction and team will always be the two biggest factors in determining which startups get funded, but have we really become so myopic as to place zero value on knowing whether a startup is congruent with, or contraindicated by, the last 80+ years of research?

So should you invest in (or work for) the startup that sends text messages to people reminding them to take their medicine? How about the one that lets you hire temp laborers using cell phones? Or the app for club owners that purports to increase the amount of money spent on drinks? In each of these cases there is a wealth of relevant literature that can be used to help figure out whether or not the founders have done their homework and how likely they are to succeed. And it seems like if you don’t have someone who’s willing to invest a few hours to read the literature then you’re playing with a significant handicap.

Investors often wait months before investing in order to let a little more information surface, during which time the valuation can (and often does) increase by literally millions. Given that the cost of doing the extra research for each deal would be nominal in the grand scheme of things, and given the fact that this research can benefit not only the investors but also the portfolio companies themselves, does it really make sense to be so confident that there’s nothing of value here?

What makes the web special is that it’s not just a technology or a place, but a set of values. That’s what we were all originally so excited about. But as startups become more and more prosaic, these values are largely becoming lost. As Howard Rheingold once said, “The ‘killer app’ of tomorrow won’t be software or hardware devices, but the social practices they make possible.” You can’t step in the same river twice, but I think there’s something to be said for startups that make possible truly novel and valuable social practices, and for creating a larger ecosystem that enables them.

___________________________________________________________________

About the Author

This article was written by Alex Krupp.


How Google’s AI Mastered All Chess Knowledge in Just 4 Hours


Chess isn’t an easy game, by human standards. But for an artificial intelligence powered by a formidable, almost alien mindset, the trivial diversion can be mastered in a few spare hours.

In a new paper, Google researchers detail how their latest AI evolution, AlphaZero, developed “superhuman performance” in chess, taking just four hours to learn the rules before obliterating the world champion chess program, Stockfish.

In other words, all of humanity’s chess knowledge – and beyond – was absorbed and surpassed by an AI in about as long as it takes to drive from New York City to Washington, DC.

After being programmed with only the rules of chess (no strategies), in just four hours AlphaZero had mastered the game to the extent it was able to best the highest-rated chess-playing program Stockfish.

In a series of 100 games against Stockfish, AlphaZero won 25 games while playing as white (with first mover advantage), and picked up three games playing as black. The rest of the contests were draws, with Stockfish recording no wins and AlphaZero no losses.
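Tallying up the reported results under standard chess match scoring (1 point for a win, 0.5 for a draw) shows just how one-sided the contest was:

```python
# The 100-game match from AlphaZero's perspective, as reported:
# 25 wins as white, 3 as black, no losses, the remainder drawn.
games = 100
wins_as_white, wins_as_black, losses = 25, 3, 0
wins = wins_as_white + wins_as_black
draws = games - wins - losses

# Standard chess match scoring: 1 point per win, 0.5 per draw.
score = wins + 0.5 * draws
print(wins, draws, score)  # 28 72 64.0
```

A final score of 64–36 without a single loss is a crushing margin at this level of play.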

“We now know who our new overlord is,” said chess researcher David Kramaley, the CEO of chess science website Chessable.

“It will no doubt revolutionise the game, but think about how this could be applied outside chess. This algorithm could run cities, continents, universes.”

Developed by Google’s DeepMind AI lab, AlphaZero is a tweaked, more generic version of AlphaGo Zero, which specialises in playing the Chinese board game, Go.

DeepMind has been refining this AI for years, in the process besting a series of human champions who fell like dominoes before the indomitable, “Godlike” neural network.

That victory streak culminated in a startling success in October, in which a new fully autonomous version of the AI – which only learns by playing itself, never facing humans – bested all its former incarnations.

By contrast, AlphaGo Zero’s predecessors partly learned how to play the game by watching moves made by human players.

That effort was intended to assist the fledgling AI in learning strategy, but it seems it may have actually been a handicap, since AlphaGo Zero’s fully self-reliant learning proved devastatingly more effective in one-on-one competition.

“It’s like an alien civilisation inventing its own mathematics,” computer scientist Nick Hynes from MIT told Gizmodo in October.

“What we’re seeing here is a model free from human bias and presuppositions. It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same.”

But things are moving so fast in this field that the October accomplishment may already have been superseded.

In their new paper, the team outlines how the very latest AlphaZero AI takes the self-playing reliance – called reinforcement learning – and applies it with a much more generalised streak that gives it a broader focus to problem solving.

That broader focus means AlphaZero doesn’t just play chess. It also plays shogi (Japanese chess) and Go – and, perhaps unsurprisingly, it took only two and eight hours respectively to master those games as well.

For now, Google and DeepMind’s computer scientists aren’t commenting publicly on the new research, which hasn’t as yet been peer-reviewed.

But from what we can tell so far, this algorithm’s dizzying ascent to the pinnacle of artificial intelligence is far from over, and even chess grandmasters are bewildered by the spectacle before them.

“I always wondered how it would be if a superior species landed on Earth and showed us how they played chess,” grandmaster Peter Heine Nielsen told the BBC.

“Now I know.”

____________________________________________________________

About the Author 

This article was produced by Grendz, a site covering new technology trends, science breakthroughs, and green and positive ideas and news.
