Source: Reuters: Technology News | 16 Aug 2018 | 1:28 am
While the company stopped short of connecting the “bad actors” behind those pages to Russia when announcing the removal on July 31, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) says that the accounts were “most probably” being run by a successor to the Russian operation that attempted to influence the 2016 election.
Facebook did identify the pages as part of a political influence campaign being operated ahead of the U.S. midterms. The new analysis, which TIME reviewed, picks apart pages in the campaign, outlining tactics that are “correlated with the behavior patterns and content we saw from the infamous St. Petersburg troll factory,” says the DFRLab’s Graham Brookie. Chief among those tactics, he says, was an attempt to amplify existing political discord in the United States. The pages posted content related to topics such as race, feminism and immigration.
The DFRLab, a non-partisan center established in 2016 within the Atlantic Council, a foreign policy think tank, has partnered with Facebook to analyze abuse on the platform. In the analysis, its experts note that the pages in the latest campaign not only had direct contact with accounts previously identified as part of the Russian Internet Research Agency but shared similar content and made similar grammatical errors in posts.
In one instance, a Facebook account (@warriorsofaztlan) had a very similar name to an account that Twitter identified as part of the Russian troll farm (@warriors_aztlan). Both were set up in the same month in 2017 and shared similar content related to “white crimes against Native Americans,” according to the analysis.
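Near-duplicate handles like these are easy to flag programmatically. As a minimal illustration (not the DFRLab’s actual method), Python’s standard library can score how alike two handles are once the leading @ and any underscores are stripped:

```python
from difflib import SequenceMatcher

def handle_similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score for two social-media handles,
    ignoring the leading @ and any underscores."""
    normalize = lambda h: h.lstrip("@").replace("_", "").lower()
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# The two handles cited in the analysis score well above 0.9.
print(round(handle_similarity("@warriorsofaztlan", "@warriors_aztlan"), 2))
```

A real investigation would combine such string matching with other signals, as the DFRLab did: creation dates, shared content and direct contact between accounts.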
Though many posts shared by the accounts contained content that had been taken from elsewhere on the Internet — a technique that may help trolls better hide their identities — some original writing contained errors a native English speaker would be unlikely to make. One post referenced “a needed and important humans,” while another contained the muddled phrase “Since the beginning of the times.”
Most alarming, Brookie says, is that the latest campaign appeared to double down on the technique of turning online political debate into real-world protests. In several instances, a page titled “Resisters” created anti-Trump events and engaged legitimate political organizations in helping to promote them among thousands of social media users. The events had titles such as “Stop Ripping Families Apart,” an apparent reference to Trump’s now-defunct family separation policy, and “The Trump Nightmare Must End.” The inauthentic administrators attempted to engage users on a range of issues, including the transgender military ban, white supremacy and rape culture.
Facebook has said its own investigation into who is behind the campaign is ongoing. The DFRLab’s analysis is independent of work the tech company is doing to track disinformation agents on its platform, though Facebook is in contact with the organization and has dedicated funding to the DFRLab’s work on elections.
Brookie emphasizes that the analysis does not amount to “a hard ‘yes’” on whether the pages are definitely connected to Russia. The similarities could be a coincidence, or the work of bad actors copying the techniques that Russian agents used.
“It’s nearly impossible to say with 100% degree of confidence that this was a Russian intelligence operation,” Brookie says, “but what we can say is it looked and acted much like Russian influence operations that we’ve seen before.”
If it is the same troll farm, both Facebook and the DFRLab say it’s doing a better job of covering its tracks than in years past, perhaps adapting to information that tech companies have shared about how they have tracked such operations. “They’re not paying for political ads in roubles anymore,” Brookie says.
Publishing a detailed analysis of the latest campaign, Brookie acknowledges, might serve as a kind of “manual to heighten their operation security.” But he says that raising awareness around the tactics that trolls are using to exacerbate political tensions, and to draw people into the street, is crucial in fighting disinformation.
“We create more resilience,” he says, “when people are more aware.”
Source: Tech – TIME | 16 Aug 2018 | 1:00 am
Twitter Inc. is the latest social network to take action against far-right commentator Alex Jones, temporarily limiting his account after he tweeted a link to a video that violated company policies against abusive behavior.
The ban is not extensive. Jones will still be able to browse Twitter and send direct messages to his followers, but he won’t be able to post publicly for seven days. The Twitter account for his show, @InfoWars, remains active.
“On Twitter we’ve been so careful,” he said in a video on the @InfoWars account, adding that Twitter Chief Executive Officer Jack Dorsey is “toying with us” like a cat and a mouse.
A Twitter spokesman confirmed the account has limited functionality. “We haven’t suspended the account but are requiring tweets which contained a broadcast in violation of our rules are deleted,” he added in a statement.
Dorsey said earlier in August that Jones and his affiliate accounts can continue to use Twitter because they haven’t violated the social-media company’s policies, despite decisions by Facebook Inc. and Google’s YouTube to pull the conspiracy theorist off their platforms after concluding that his content violates hate speech and harassment policies.
Source: Tech – TIME | 15 Aug 2018 | 6:51 am
Progressives are singling out technology companies for new regulations. In New York, for instance, the City Council has just voted to cap the number of ridesharing vehicles for services like Uber and Lyft, and may require that drivers earn a minimum amount. Last month, California imposed costly and complex regulations on the voluntary exchange of data with services such as Facebook and Google. But progressives also argue that inequality is the defining issue of our time, and these regulations hamper the very contributions these companies make to reducing it.
Goods and services available at no cost boost equality, because most people can enjoy them. Facebook and Google provide search, maps, and connections that benefit billions of people. Once a consumer pays for an internet connection, they get such services at what an economist would call a marginal cost of zero. And these free services are valuable, as shown by the ever increasing amount of time people spend using them.
To be sure, in return for these benefits, we give up personal data that provide profits to companies like Facebook and Google because they can then sell targeted advertisements. But taking account of the data’s value underscores the equalizing force of the services provided by the tech giants. The wealthier a user is, the more valuable his or her data. Thus, many people of modest means are getting a substantially greater net benefit than the wealthy.
Ridesharing services are not free, but they nevertheless democratize the amenities of transportation, providing a better life to those who are not part of the one percent. The very rich have traditionally enjoyed chauffeurs who are available to give them a quality ride anywhere at a moment’s notice. Ridesharing services provide a good approximation of a chauffeur at the touch of an app, even in the rain or in an unfamiliar town. More generally, a defining characteristic of being very wealthy has been having servants at beck and call. The sharing and gig economies create information infrastructures to make providers seamlessly available when the middle class needs them most.
Those providing these innovations gain benefits as well. For instance, ridesharing drivers get advantages that medallion taxi drivers don’t. Because many have few fixed costs, they can use the app to start and stop work as they please, which allows them to more easily take care of family or pursue other activities. Economists have estimated that this flexibility is worth as much as 40 percent on top of their dollar earnings.
Airbnb similarly promotes equality because it allows people of modest means to monetize their single greatest asset: their home. In contrast, rich people’s wealth is already monetized, as it is often predominantly in securities. But before Airbnb, it was hard for most people to find a market to rent a spare room or their entire home when they were on vacation.
But new progressive regulation will reduce these benefits. For instance, proposals to treat ridesharing participants as employees would reduce the flexibility that permits the supply of rideshares to expand at a moment’s notice to meet demand. Requiring Uber and Lyft drivers to earn a minimum amount is likely to raise prices, shrinking the service’s benefits to both passengers and drivers. Similarly, restrictions on Airbnb make it much more difficult for people to earn income from their most valuable asset.
Meanwhile, if other jurisdictions follow California’s recent legislation and impose complex and costly regulations on the voluntary exchange of data for services, it will reduce the incentives of companies like Google and Facebook to provide further free goods to the world. Thirty years ago, accessing all the world’s information for free with the click of a mouse was the stuff of science fiction. We cannot predict the innovative free services of tomorrow — but they too are likely dependent on the stream of income that tech companies get from aggregating our data.
Similarly, as the gig and sharing economies expand from ridesharing and room sharing to on-demand chefs and handymen, they distribute more broadly the personal services that were once the sole province of the rich. But targeting this economy with new regulations reduces further gains for equality.
To be sure, the information technology component of services should not immunize companies from general laws. If Airbnb hosts are discriminating against guests on the basis of race, they should be fined. If Uber is misleading its drivers into renting cars at bad prices, that practice should be prohibited. Facebook and Google, like other companies, should be forced to disclose to consumers exactly the terms to which they are agreeing. But targeting these companies with special regulation helps their competitors and harms not only efficiency, but equality.
But the progressive yen to regulate the information economy ignores an essential truth: information technology helps equalize consumption. We cannot eat the same apple or live in the same house, but we can all benefit from the same information, be it the information on the internet or the network of drivers and short term rental properties that the sharing economy provides. Innovation in information delivery will continue to have important leveling effects if regulation gets out of the way.
Source: Tech – TIME | 10 Aug 2018 | 1:22 pm
Gaming sensation Fortnite will be immediately available for download on recent Samsung mobile devices, Epic Games CEO Tim Sweeney announced at a Samsung event Thursday, marking the first time it will be playable on Android handsets.
Samsung users can download Fortnite from Samsung’s Game Launcher app on the new Galaxy Note 9 and Tab S4, as well as the Galaxy S9, S9 Plus, Note 8, Galaxy S8, Galaxy S8 Plus, Galaxy S7, Galaxy S7 Edge, and Tab S3.
Fortnite will be available on other, non-Samsung Android devices in “the next few days,” according to The Verge.
Fortnite developer Epic Games caused a stir when it recently announced Fortnite will not be available on the Google Play store. Instead, many Android users will have to download the game via an installer on Epic’s website. The move will allow Epic to bypass the 30% revenue cut that Google normally takes from purchases made via Google Play. While Fortnite is free to download, it offers a variety of paid in-game downloads, like player avatar customizations.
The game is already a sensation, with an estimated 125 million players across console, PC and iOS platforms and over $1 billion in revenue. The release on Android marks Fortnite’s first move to tap into one of the world’s largest platforms, with over 2 billion active Android devices.
Source: Tech – TIME | 9 Aug 2018 | 1:45 pm
(Bloomberg) — Samsung Electronics Co. unveiled the Galaxy Note 9 in New York, banking on the larger-screen device to rejuvenate sales of a struggling flagship line and fend off Apple Inc.’s upcoming iPhones over the holidays.
The 6.4-inch screen Note 9 will start at $999.99 and max out at $1,249.99 — becoming, at about $100 above the iPhone X’s upper limit, one of the world’s most expensive consumer phones. It looks similar to last year’s 6.1-inch Note 8 but sports a revamped Bluetooth stylus — a longtime selling point of the Note series — as well as an upgraded camera that takes sharper photos than the S9 released earlier this year, Samsung said Thursday.
Samsung’s latest device enters the ring at a time of slowing smartphone demand globally and a disappointing performance by its cousin, the Galaxy S9. That marquee gadget failed to capture consumers’ imagination or stop Huawei Technologies Co. and Xiaomi Corp. from grabbing market share at the Korean giant’s expense. It’ll also go up against the new iPhones, typically unveiled in September.
“The product was too similar to the S8. It wasn’t distinctive enough for consumers to justify the upgrade,” Bryan Ma, vice president of devices research at IDC, said. “My worry is that the Note 9 may meet the same fate.”
Samsung is counting on its latest device to lead the charge during the crucial holiday season and revitalize a mobile division where profits almost halved last quarter. After a robust decade of growth, demand is cooling as consumers wait longer to replace devices, even as cheaper Chinese brands flood the market and chip away at Samsung and Apple’s longstanding dominance.
Samsung partly blamed itself for the disappointing performance, saying on an earnings call that it had played it too safe with smartphones for too long. Since the recall of the fire-prone Note 7, which cost the company billions of dollars, it has intensified quality inspections, even when that meant withholding innovations from consumers.
That stance is easing, with executives promising to introduce eye-catching features more aggressively. Faster 5G internet connectivity is one of the features Samsung is striving to bring to consumers, they said on an earnings conference call last month.
A new stylus called the S Pen is this year’s highlight upgrade. It will let users remotely control the Note 9’s camera and switch between slides in a presentation, the company said. It’ll also allow more accurate writing and drawing on the phone’s screen. The Note 9’s camera upgrade is on par with the one given to the S9 in March, adding enhanced colors and exposure. It also has a relocated fingerprint scanner on the back but not one built into the screen, something the company has said it’s developing.
The Note 9, which comes in multiple hues including blue and purple in the U.S. and black and copper internationally, sports an upgraded version of Samsung’s DeX system. This feature lets users connect their device to a computer display using a separate accessory, essentially turning the smartphone into a full-featured desktop with apps. The Note 9 is designed to encourage adoption of the feature by allowing users to connect the phone to a monitor via an HDMI cable, bypassing the need to buy a separate docking station.
Even in tough times, Samsung has a solid source of income it can lean on for investment: memory chips, an industry the world’s biggest chipmaker controls with SK Hynix and Micron. Samsung also supplies the organic light-emitting diode screens that go into premium devices such as the iPhone X.
Solid cash reserves also helped the South Korean company set up the world’s biggest smartphone factory in India this year, a banner event that drew the leaders of the two countries along with Vice Chairman Jay Y. Lee, Samsung’s de facto head.
At the New York event on Thursday, Samsung also introduced a new Galaxy Watch that competes with a similar product from Apple. The redesigned smartwatch has a circular screen, is water-resistant, and can connect to LTE cellular networks, the company said. It has improved battery life over previous Samsung watch models, and will be compatible with a new charger that can simultaneously charge smartphones and the watch.
The gadget will feature revamped health software that works with the heart-rate sensor. It has new tracking functionality for workouts and auto-detection for when a person begins a run, for example. It also has sleep tracking, providing detail into both hours and quality of sleep.
Samsung also debuted a new product category for its line, the Galaxy Home speaker. It enters a crowded market with Amazon.com Inc.’s Echo, Alphabet Inc.’s Google Home and the Apple HomePod. The new speaker has eight microphones and focuses on audio quality, Samsung said. The device has a mesh black design and a tripod-like stand. Samsung called the announcement a preview and said it would share more details in the near future.
Source: Tech – TIME | 9 Aug 2018 | 1:28 pm
Sitting in front of a computer not long ago, a tenured history professor faced a challenge that billions of us do every day: deciding whether to believe something on the Internet.
On his screen was an article published by a group called the American College of Pediatricians that discussed how to handle bullying in schools. Among the advice it offered: schools shouldn’t highlight particular groups targeted by bullying because doing so might call attention to “temporarily confused adolescents.”
Scanning the site, the professor took note of the “.org” web address and a list of academic-looking citations. The site’s sober design, devoid of flashy, autoplaying videos, lent it credibility, he thought. After five minutes, he had found little reason to doubt the article. “I’m clearly looking at an official site,” he said.
What the professor never realized as he focused on the page’s superficial features is that the group in question is a socially conservative splinter faction that broke in 2002 from the mainstream American Academy of Pediatrics over the issue of adoption by same-sex couples. It has been accused of promoting antigay policies, and the Southern Poverty Law Center designates it as a hate group.
Trust was the issue at hand. The bookish professor had been asked to assess the article as part of an experiment run by Stanford University psychologist Sam Wineburg. His team, known as the Stanford History Education Group, has given scores of subjects such tasks in hopes of answering two of the most vexing questions of the Internet age: Why are even the smartest among us so bad at making judgments about what to trust on the web? And how can we get better?
Wineburg’s team has found that Americans of all ages, from digitally savvy tweens to high-IQ academics, fail to ask important questions about content they encounter on a browser, adding to research on our online gullibility. Other studies have shown that people retweet links without clicking on them and rely too much on search engines. A 2016 Pew poll found that nearly a quarter of Americans said they had shared a made-up news story. In his experiments, MIT cognitive scientist David Rand has found that, on average, people are inclined to believe false news at least 20% of the time. “We are all driving cars, but none of us have licenses,” Wineburg says of consuming information online.
Our inability to parse truth from fiction on the Internet is, of course, more than an academic matter. The scourge of “fake news” and its many cousins–from clickbait to “deep fakes” (realistic-looking videos showing events that never happened)–have experts fearful for the future of democracy. Politicians and technologists have warned that meddlers are trying to manipulate elections around the globe by spreading disinformation. That’s what Russian agents did in 2016, according to U.S. intelligence agencies. And on July 31, Facebook revealed that it had found evidence of a political-influence campaign on the platform ahead of the 2018 midterm elections. The authors of one now defunct page got thousands of people to express interest in attending a made-up protest that apparently aimed to put white nationalists and left-wingers on the same streets.
But the stakes are even bigger than elections. Our ability to vet information matters every time a mother asks Google whether her child should be vaccinated and every time a kid encounters a Holocaust denial on Twitter. In India, false rumors about child kidnappings that spread on WhatsApp have prompted mobs to beat innocent people to death. “It’s the equivalent of a public-health crisis,” says Alan Miller, founder of the nonpartisan News Literacy Project.
There is no quick fix, though tech companies are under increasing pressure to come up with solutions. Facebook lost more than $120 billion in stock value in a single day in July as the company dealt with a range of issues limiting its growth, including criticism about how conspiracy theories spread on the platform. But engineers can’t teach machines to decide what is true or false in a world where humans often don’t agree.
In a country founded on free speech, debates over who adjudicates truth and lies online are contentious. Many welcomed the decision by major tech companies in early August to remove content from florid conspiracy theorist Alex Jones, who has alleged that passenger-jet contrails are damaging people’s brains and spread claims that families of Sandy Hook massacre victims are actors in an elaborate hoax. But others cried censorship. And even if law enforcement and intelligence agencies could ferret out every bad actor with a keyboard, it seems unwise to put the government in charge of scrubbing the Internet of misleading statements.
What is clear, however, is that there is another responsible party. The problem is not just malicious bots or chaos-loving trolls or Macedonian teenagers pushing phony stories for profit. The problem is also us, the susceptible readers. And experts like Wineburg believe that the better we understand the way we think in the digital world, the better chance we have to be part of the solution.
We don’t fall for false news just because we’re dumb. Often it’s a matter of letting the wrong impulses take over. In an era when the average American spends 24 hours each week online–when we’re always juggling inboxes and feeds and alerts–it’s easy to feel like we don’t have time to read anything but headlines. We are social animals, and the desire for likes can supersede a latent feeling that a story seems dicey. Political convictions lead us to lazy thinking. But there’s an even more fundamental impulse at play: our innate desire for an easy answer.
Humans like to think of themselves as rational creatures, but much of the time we are guided by emotional and irrational thinking. Psychologists have shown this through the study of cognitive shortcuts known as heuristics. It’s hard to imagine getting through so much as a trip to the grocery store without these helpful time-savers. “You don’t and can’t take the time and energy to examine and compare every brand of yogurt,” says Wray Herbert, author of On Second Thought: Outsmarting Your Mind’s Hard-Wired Habits. So we might instead rely on what is known as the familiarity heuristic, our tendency to assume that if something is familiar, it must be good and safe.
These habits of mind surely helped our ancestors survive. The problem is that relying on them too much can also lead people astray, particularly in an online environment. In one of his experiments, MIT’s Rand illustrated the dark side of the fluency heuristic, our tendency to believe things we’ve been exposed to in the past. The study presented subjects with headlines–some false, some true–in a format identical to what users see on Facebook. Rand found that simply being exposed to fake news (like an article that claimed President Trump was going to bring back the draft) made people more likely to rate those stories as accurate later on in the experiment. If you’ve seen something before, “your brain subconsciously uses that as an indication that it’s true,” Rand says.
This is a tendency that propagandists have been aware of forever. The difference is that it has never been easier to get eyeballs on the message, nor to get enemies of the message to help spread it. The researchers who conducted the Pew poll noted that one reason people knowingly share made-up news is to “call out” the stories as fake. That might make a post popular among like-minded peers on social media, but it can also help false claims sink into the collective consciousness.
Academics are only beginning to grasp all the ways our brains are shaped by the Internet, a key reason that stopping the spread of misinformation is so tricky. One attempt by Facebook shows how introducing new signals into this busy domain can backfire. With hopes of curtailing junk news, the company started attaching warnings to posts that contained claims that fact-checkers had rated as false. But a study found that this can make users more likely to believe any unflagged post. Tessa Lyons-Laing, a product manager who works on Facebook’s News Feed, says the company toyed with the idea of alerting users to hoaxes that were traveling around the web each day before realizing that an “immunization approach” might be counterproductive. “We’re really trying to understand the problem and to be thoughtful about the research and therefore, in some cases, to move slower,” she says.
Part of the issue is that people are still relying on outdated shortcuts, the kind we were taught to use in a library. Take the professor in Wineburg’s study. A list of citations means one thing when it appears in a book that has been vetted by a publisher, a fact-checker and a librarian. It means quite another on the Internet, where everyone has access to a personal printing press. Newspapers used to physically separate hard news and commentary, so our minds could easily grasp what was what. But today two-thirds of Americans get news from social media, where posts from publishers get the same packaging as birthday greetings and rants. Content that warrants an emotional response is mixed with things that require deeper consideration. “It all looks identical,” says Harvard researcher Claire Wardle, “so our brain has to work harder to make sense of those different types of information.”
Instead of working harder, we often try to outsource the job. Studies have shown that people assume that the higher something appears in Google search results, the more reliable it is. But Google’s algorithms are surfacing content based on keywords, not truth. If you ask about using apricot seeds to cure cancer, the tool will dutifully find pages asserting that they work. “A search engine is a search engine,” says Richard Gingras, vice president of news at Google. “I don’t think anyone really wants Google to be the arbiter of what is or is not acceptable expression.”
That’s just one example of how we need to retrain our brains. We’re also inclined to trust visuals, says Wardle. But some photos are doctored, and other legitimate ones are put in false contexts. On Twitter, people use the size of others’ followings as a proxy for reliability, yet millions of followers have been paid for (and an estimated 10% of “users” may be bots). In his studies, Wineburg found that people of all ages were inclined to evaluate sources based on features like the site’s URL and graphic design, things that are easy to manipulate.
It makes sense that humans would glom on to just about anything when they’re so worn out by the news. But when we resist snap judgments, we are harder to fool. “You just have to stop and think,” Rand says of the experiments he has run on the subject. “All of the data we have collected suggests that’s the real problem. It’s not that people are being super-biased and using their reasoning ability to trick themselves into believing crazy stuff. It’s just that people aren’t stopping. They’re rolling on.”
That is, of course, the way social-media platforms have been designed. The endless feeds and intermittent rewards are engineered to keep you reading. And there are other environmental factors at play, like people’s ability to easily seek out information that confirms their beliefs. But Rand is not the only academic who believes that we can take a big bite out of errors if we slow down.
Wineburg, an 18-year veteran of Stanford, works out of a small office in the center of the palm-lined campus. His group’s specialty is developing curricula that teachers across the nation use to train kids in critical thinking. Now they’re trying to update those lessons for life in a digital age. With the help of funding from Google, which has devoted $3 million to the digital-literacy project they are part of, the researchers hope to deploy new rules of the road by next year, outlining techniques that anyone can use to draw better conclusions on the web.
His group doesn’t just come up with smart ideas; it tests them. But as they set out to develop these lessons, they struggled to find research about best practices. “Where are the studies about what superstars do, so that we might learn from them?” Wineburg recalls thinking, sitting in the team’s office beneath a print of the Tabula Rogeriana, a medieval map that pictures the world in a way we now see as upside-down. Eventually, a cold email to an office in New York revealed a promising model: professional fact-checkers.
Fact-checkers, they found, didn’t fall prey to the same missteps as other groups. When presented with the American College of Pediatricians task, for example, they almost immediately left the site and started opening new tabs to see what the wider web had to say about the organization. Wineburg has dubbed this lateral reading: if a person never leaves a site–as the professor failed to do–they are essentially wearing blinders. Fact-checkers not only zipped to additional sources, but also laid their references side by side, to better keep their bearings.
In another test, the researchers asked subjects to assess the website MinimumWage.com. In a few minutes’ time, 100% of fact-checkers figured out that the site is backed by a PR firm that also represents the restaurant industry, a sector that generally opposes raising hourly pay. Only 60% of historians and 40% of Stanford students made the same discovery, often requiring a second prompt to find out who was behind the site.
Another tactic fact-checkers used that others didn’t is what Wineburg calls “click restraint.” They would scan a whole page of search results–maybe even two–before choosing a path forward. “It’s the ability to stand back and get a sense of the overall territory in which you’ve landed,” he says, “rather than promiscuously clicking on the first thing.” This is important, because people or organizations with an agenda can game search results by packing their sites with keywords, so that those sites rise to the top and more objective assessments get buried.
The lessons they’ve developed include such techniques and teach kids to always start with the same question: Who is behind the information? Although the lessons are still being refined, a pilot that Wineburg’s team conducted at a college in California this past spring showed that such tiny behavioral changes can yield significant results. Another technique he champions is simpler still: just read it.
One study found that 6 in 10 links get retweeted without users reading anything beyond someone else’s summary of the content. Another found that false stories travel six times as fast as true ones on Twitter, apparently because lies do a better job of stimulating feelings of surprise and disgust. But taking a beat can help us avoid knee-jerk reactions, so that we don’t blindly add garbage to the vast flotillas already clogging up the web. “What makes the false or hyperpartisan claims do really well is they’re a bit outlandish,” Rand says. “That same thing that makes them successful in spreading online is the same thing that, on reflection, would make you realize it wasn’t true.”
Tech companies have a big role to play in stemming the tide of misinformation, and they’re working on it. But they have also realized that what Harvard’s Wardle calls our “information disorder” cannot be solved by engineers alone. Algorithms are good at things like identifying fake accounts, and platforms are flagging millions of them every week. Yet machines could only take Facebook so far in identifying the most recent influence campaign.
One inauthentic page, titled “Resisters,” ginned up a counterprotest to a “white civil rights” rally planned for August in Washington, D.C., and got legitimate organizations to help promote it. More than 2,600 people expressed interest in going before Facebook revealed that the page was part of a coordinated operation, disabled the event and alerted users. The company has hired thousands of content reviewers who have the sophistication to weed through tricky mixes of truth and lies. But Facebook can’t employ enough humans to manually review the billions of posts that are put up each day, across myriad countries and languages.
Many misleading posts don’t violate tech companies’ terms of service. Facebook, one of the firms that removed content from Jones, said the decision did not relate to “false news” but to prohibitions against rhetoric such as “dehumanizing language.” Apple and Spotify cited rules against hate speech, which is generally protected by the First Amendment. “With free expression, you get the good and the bad, and you have to accept both,” says Google’s Gingras. “And hopefully you have a society that can distinguish between the two.”
You also need a society that cares about that distinction. Schools make sense as an answer, but it will take money and political will to get new curricula into classrooms. Teachers must master new material and train students to be skeptical without making them cynical. “Once you start getting kids to question information,” says Stanford’s Sarah McGrew, “they can fall into this attitude where nothing is reliable anymore.” Advocates want to teach kids other defensive skills, like how to reverse-search an image (to make sure a photo is really portraying what someone says it is) and how to type a neutral query into the search bar. But even if the perfect lessons are distributed for free online, anyone who has already graduated will need to opt in. They will have to take initiative and also be willing to question their prejudices, to second-guess information they might like to believe. And relying on open-mindedness to defeat tribal tendencies has not proved a winning formula in past searches for truth.
That is why many advocates are suggesting that we reach for another powerful tool: shame. Wardle says we need to make sharing misinformation as shameful as drunk driving. Wineburg invokes the environmental movement, saying we need to cultivate an awareness of “digital pollution” on the Internet. “We have to get people to think that they are littering,” Wineburg says, “by forwarding stuff that isn’t true.” The idea is to make people see the aggregate effect of little actions, that one by one, ill-advised clicks contribute to the web’s being a toxic place. Having a well-informed citizenry may be, in the big picture, as important to survival as having clean air and water. “If we can’t come together as a society around this issue,” Wineburg says, “it is our doom.”
This appears in the August 20, 2018 issue of TIME.
Source: Tech – TIME | 9 Aug 2018 | 6:19 am
The powerful 6.9-magnitude earthquake that struck the island of Lombok has left more than 130 people dead and displaced thousands.
While exchanging messages on Facebook about the earthquake, many Indonesian speakers used the word “selamat,” which has several meanings including “safe,” “unhurt” or “congratulations” depending on the context. Facebook’s feature misinterpreted the comments and automatically sent out celebratory animations.
“Congrats” in Indonesian is “selamat”. Selamat also means “to survive.”
After the 6.9 magnitude earthquake in Lombok, Facebook users wrote “I hope people will survive”. Then Facebook highlighted the word “selamat” and throw some balloons and confetti. pic.twitter.com/DEhYLqHWUz
— Herman Saksono (@hermansaksono) August 6, 2018
“We regret that it appeared in this unfortunate context and have since turned off the feature locally,” Facebook spokesperson Lisa Stratton said in a statement to news site Motherboard. “Our hearts go out to the people affected by the earthquake.”
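The mistranslation described above is a classic failure of context-blind keyword matching: the feature apparently fired on a trigger word alone, regardless of the sentence around it. A minimal, entirely hypothetical sketch of that failure mode (the trigger list and function names are invented for illustration and do not reflect Facebook’s actual implementation):

```python
# Hypothetical illustration of context-blind keyword triggering.
# The trigger set and logic are invented for this sketch; they do
# not reflect how Facebook's feature actually worked.

CELEBRATION_TRIGGERS = {"selamat", "congrats", "congratulations"}

def should_animate(message: str) -> bool:
    """Naive trigger: fire the celebration animation if any trigger
    word appears, ignoring the surrounding context entirely."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & CELEBRATION_TRIGGERS)

# "I hope people will survive" in Indonesian contains "selamat",
# so a context-blind matcher fires the celebration anyway.
print(should_animate("Saya harap orang-orang selamat"))  # True
print(should_animate("Congrats on the new job!"))        # True
print(should_animate("Semoga semua aman"))               # False
```

A context-aware version would need to weigh the whole sentence rather than a single word, which may be why Facebook chose to disable the feature locally rather than patch a trigger list.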
According to authorities, more than 156,000 people have been displaced and tens of thousands of homes have been destroyed, the BBC reports. The Indonesian Red Cross said Thursday that an estimated 20,000 people in remote areas of the island are still without aid.
The National Disaster Mitigation Agency said that at least 131 people were killed in Sunday’s earthquake, but other agencies say the death toll has risen to more than 300.
Off the coast of Lombok, some 5,000 foreign and Indonesian tourists have been evacuated from three outlying islands, AP reports.
Source: Tech – TIME | 9 Aug 2018 | 1:46 am
Facing mounting scrutiny for allowing conspiracy theorist Alex Jones to remain on the platform, Twitter CEO Jack Dorsey gave an exclusive radio interview to Fox News host Sean Hannity on Wednesday, explaining how such decisions are made.
Dorsey said that when considering whether to remove extremist accounts from the site, he relies on reports from users who are experiencing or witnessing harassment and then considers the “context of everything that’s happening around it.”
“There might be violent extremist groups that try to get onto our service, and we take that into consideration. We also look, in those particular cases, at off-platform behavior as well,” he said.
Jones — whose content was removed this week from Facebook, YouTube, Spotify and Apple for violating hate speech guidelines — has fueled false conspiracy theories about the deadly 2012 Sandy Hook elementary school shooting. The conspiracy theories have resulted in harassment and death threats to victims’ family members, some of whom have been forced into hiding and are now suing Jones for defamation. But Twitter has decided not to suspend Jones or his website InfoWars.
“We know that’s hard for many but the reason is simple: he hasn’t violated our rules. We’ll enforce if he does. And we’ll continue to promote a healthy conversational environment by ensuring tweets aren’t artificially amplified,” Dorsey said in a series of tweets on Tuesday. “Truth is we’ve been terrible at explaining our decisions in the past. We’re fixing that. We’re going to hold Jones to the same standard we hold to every account, not taking one-off actions to make us feel good in the short term, and adding fuel to new conspiracy theories.”
Hannity’s own role in promoting a conspiracy theory about the death of Democratic National Committee staffer Seth Rich sparked criticism and cost him advertisers last year. On Wednesday, Hannity praised Dorsey and asked if he had received requests to ban him from Twitter as well.
“I haven’t heard those requests directly, but I’m sure someone is saying it somewhere,” Dorsey said.
Asked whether Twitter should allow all kinds of speech to exist on the platform, except for violence, Dorsey said there should be some limits.
“I think there’s always boundaries to that,” he said. “You enumerated a number of them around violent threats or giving up personal information around someone’s home or office, or identifiable information that people could utilize to put them in real physical harm. We need to balance all of those constraints. We’ve tried to codify them in our terms of service. We do believe in the power of free expression, but we also need to balance that with the fact that bad-faith actors intentionally try to silence other voices.”
Source: Tech – TIME | 8 Aug 2018 | 8:50 pm
Twitter CEO Jack Dorsey took to his own social media platform Tuesday to explain why the company decided not to ban conspiracy theorist and conservative radio show host Alex Jones, unlike other major tech companies.
Over the past week, Apple, Spotify, Facebook and YouTube each banned or removed content from Jones’ pages, channels and his website Infowars from their platforms, citing community guidelines against hate speech.
In a series of tweets Tuesday, Dorsey said that Jones has not violated Twitter’s rules.
“We didn’t suspend Alex Jones or Infowars yesterday,” Dorsey wrote. “We know that’s hard for many but the reason is simple: he hasn’t violated our rules. We’ll enforce if he does. And we’ll continue to promote a healthy conversational environment by ensuring tweets aren’t artificially amplified.”
In the same thread, Dorsey said Twitter in the past has “been terrible at explaining our decisions” but that they’re “fixing that.”
“We’re going to hold Jones to the same standard we hold to every account, not taking one-off actions to make us feel good in the short term, and adding fuel to new conspiracy theories,” he said.
Twitter has faced significant backlash for not taking action against accounts promoting neo-Nazi, white supremacist and other extremist views.
Later in the thread, Dorsey said it was up to journalists to “document, validate, and refute” claims like Jones’. “That is what serves the public conversation best.”
Accounts like Jones' can often sensationalize issues and spread unsubstantiated rumors, so it’s critical journalists document, validate, and refute such information directly so people can form their own opinions. This is what serves the public conversation best.
— jack (@jack) August 8, 2018
Jones is currently being sued by the parents of Sandy Hook victims for claiming the 2012 mass school shooting was a hoax.
Source: Tech – TIME | 7 Aug 2018 | 11:04 pm
(CAPE CANAVERAL, Fla.) — SpaceX used its newest-style booster for a second time to put a communications satellite into orbit for Indonesia.
The Falcon 9 rocket blasted off early Tuesday morning from Cape Canaveral, Florida.
The first-stage booster previously soared in May, the first time out the gate for this upgraded rocket. After performing its latest job, the booster landed upright on a floating platform in the pitch-black Atlantic.
Each rocket in this new and improved line is intended for dozens of repeat flights. SpaceX is striving to lower launch costs through rocket recycling. SpaceX founder Elon Musk’s goal is for swift launch turnarounds using the same rocket, even twice within 24 hours. He says that could happen as early as next year.