A Dozen Experts with Questions for Tech CEOs on Disinformation and Extremism


Questions for all three Tech CEOs
1. The First Amendment is often erroneously invoked to suggest that your companies cannot or should not restrict content. But as you know, the First Amendment actually gives you, as private businesses, the right to set terms of service as you see fit, including what kind of content and conduct to allow, and to deny the use of your services to those who violate those terms. What specific actions is each of your companies taking to exercise those First Amendment rights and ensure that your platforms and services are not plagued with dangerous misinformation? (Mary Anne Franks)

2. One highly influential piece of misinformation is that the tech industry is biased against conservative figures and conservative content. Conservative figures and content actually perform very well on social media sites such as Facebook, even though they disproportionately violate companies’ policies against misinformation and other abuse. Are each of you willing to commit, going forward, to enforcing your policies against misinformation and other abuses regardless of accusations of political bias? (Mary Anne Franks)

3. The principle of collective responsibility is a familiar concept in the physical world. A person can be partly responsible for harm even if he did not intend for it to happen and was not its direct cause. For instance, a hotel can be held accountable for failing to provide adequate security measures against foreseeable criminal activity against its guests. An employer can be liable for failing to address sexual harassment in the workplace. Do you believe that tech companies are exempt from the principle of collective responsibility, and if so, why? (Mary Anne Franks)

4. Do you think your services’ responsibilities to address disinformation vary depending on whether the content is organically posted by users, versus placed as paid advertising? (Jonathan Zittrain)

5. Along with law enforcement agencies, Congress is conducting multiple lines of inquiry into January 6th and there may indeed be a National Commission that will have the role of the tech platforms in its remit. Have you taken proactive steps to preserve evidence that may be relevant to the election disinformation campaign that resulted in the January 6th siege on the Capitol, and to preserve all accounts, groups and exchanges of information on your sites that may be associated with parties that participated in it? (Justin Hendrix)

6. Looking beyond content moderation, can you explain what exactly you have done to ensure your tools — your algorithms, recommendation engines, and targeting tools — are not amplifying conspiracy theories and disinformation, connecting people to dangerous content, or recommending hate groups or purveyors of disinformation and conspiracy theories to people? For example, can you provide detailed answers on some of the Capitol riot suspects’ usage history, including the following:

Facebook: Which Facebook groups were they members of? Did Facebook recommend those groups, or did the individuals search for the specific groups on their own? What ads were targeted at them based on either the data you gathered or interests you inferred about them? Were they connected to any known conspiracy theorists, QAnon believers, or other known January 6th rioters due to Facebook’s recommendations?
YouTube: Of the videos the individuals watched with Stop the Steal content, calls to question the election, white supremacy content and other hate and conspiracy content, how many were recommended by YouTube to the viewer?
Twitter: Were any of the conspiracy theorists or other purveyors of electoral misinformation and Stop the Steal activity recommended to them as people to follow? Were their feeds curated to show more Stop the Steal and other conspiracy theory tweets than authoritative sources?
And to all: Will you allow any academics and members of this committee to view the data to answer these questions? (Yael Eisenstat)

7. There is a growing body of research on the disproportionate effects of disinformation and white supremacist extremism on women and people of color. This week, there was violence against the Asian American community. The New York Times reports that racist memes and posts about Asian Americans “have created fear and dehumanization,” setting the stage for real-world violence. Can you describe the specific investments you are making in threat analysis and mitigation for these communities? (Justin Hendrix)

Questions for Facebook CEO Mark Zuckerberg

1. Mr. Zuckerberg, you and other Facebook executives have routinely testified to lawmakers and regulators that your AI finds and removes as much as 99% of some forms of objectionable content, such as terrorist propaganda, human trafficking content and, more recently, child sex exploitation content. This is commonly understood to mean that Facebook’s AI and moderators remove 99% of all such content. But can you state clearly whether you actually mean that your AI removes 99% of what you remove, rather than 99% of the total amount of such content? Does Facebook have evidence about its overall rate of removal of terror content, human trafficking content, and child sexual abuse material (CSAM) that it can provide to this Committee? Studies by the Alliance to Counter Crime Online indicate you are removing only about 25-30%. Can you explain the discrepancy? (Gretchen Peters)

2. Facebook executives like to claim that Facebook is just a mirror to society. But multiple studies — including, apparently, internal Facebook studies — have shown that Facebook recommendation tools and groups connect bad actors, amplify illegal and objectionable content and amplify conspiracies and misinformation. Why can’t you, or won’t you, shut down these tools, at least for criminal and disinformation content? (Gretchen Peters)

3. Aside from labeling misinformation or outright deleting it, there’s also the possibility of simply making it circulate less. (a) Is an assessment of misinformation taken into account as Facebook decides what to promote or recommend in feeds? (b) Could users be told if such adjustments are to be applied to what they are sharing? (c) Decisions about content moderation often entail obscuring or deleting some information. Would Facebook be willing to automatically document those actions as they happen, perhaps embargoing or escrowing them with independent research libraries, so that decisions might be understood and evaluated by researchers, and trends made known to the public, later on? (Jonathan Zittrain)

4. How can it be that Facebook and Instagram act on fewer than one in twenty pieces of misinformation about Covid and vaccines reported to them by users? (Imran Ahmed)

5. There is clear evidence that Instagram’s algorithm suggests misinformation from notable anti-vaxxers whose accounts have even been granted verified status. With lives depending on the vaccine rollout, when will Facebook address this issue and fix Instagram’s algorithm? Did Facebook perform safety checks to prevent the algorithmic amplification of Covid-19 misinformation? Why were posts with content warnings, for example content warnings about Covid-19 misinformation, promoted into Instagram feeds? What is the process for suggesting and promoting posts that are not checked first? (Imran Ahmed)

6. Former Facebook policy staffers came forward to say that “Mark personally didn’t like the punishment, so he changed the rules” when it came to banning Alex Jones and other extremists like the Oath Keepers. What role do you play in the moderation of misinformation and in deciding what harmful content qualifies for removal? (Imran Ahmed)

7. Facebook’s own 2016 research showed that 64% of people who joined Facebook groups promoting extremist content did so at the prompting of Facebook’s recommendation tools. Facebook reportedly changed its policies. You were recently asked in a Senate hearing whether you had seen a reduction in your platform’s facilitation of extremist group recruitment since those policies were changed, to which you responded, “Senator, I’m not familiar with that specific study.” Are you now familiar with that study, and what is your answer now — did you see a reduction in your platform’s facilitation of extremist group recruitment since those policies were changed? (Damian Collins)

8. Did Facebook complete the app audit it promised during the Cambridge Analytica scandal? Have you found evidence of other apps harvesting Facebook user data in a similar way to Alexander Kogan’s app? Will you make public a list of such apps? (Damian Collins)

9. A Washington Post story referred to internal Facebook research that focused on super-spreaders of anti-vaccine content. What remedies are you considering to balance freedom of expression while recognizing that a committed handful of individuals are repeatedly responsible for spreading harmful content across both Facebook and Instagram? (Renée DiResta)

10. A recent report in MIT Technology Review found that there is no single team at Facebook tasked with understanding how to alter Facebook’s “content-ranking models to tamp down misinformation and extremism.” Will you commit today to creating a department at Facebook that has authority over all other departments to address these issues, even when doing so harms Facebook’s short-term business interests? (Justin Hendrix)

11. The Technology Review report also found that you have limited efforts to investigate this question because of the influence of your policy team, in particular Joel Kaplan, Facebook’s VP of global public policy. The Technology Review report said that when deciding whether a model intended to address misinformation is fair with respect to political ideology, the Facebook Responsible AI team found that “fairness” does not mean the model should affect conservative and liberal users equally: “If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often as well.” But the article says members of Joel Kaplan’s team “followed exactly the opposite approach: they took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model couldn’t be deployed until the team fixed this discrepancy. But that effectively made the model meaningless.” In other words, the change meant the model would have literally no impact on the actual problem of misinformation. Is it Facebook’s policy to seek political balance even when that means allowing harmful misinformation and disinformation to remain on its platform? (Justin Hendrix)

12. As Facebook continues its development of artificial intelligence through its Responsible AI project, how is the company deploying this technology to limit the impact of hate speech, misinformation, and disinformation campaigns on its platform? By recent accounts, Facebook is deploying AI in support of platform growth and not necessarily in the interest of the communities it claims to care about. (Erin Shields)

13. Concerning the January 6 attacks, Sheryl Sandberg said, “I think these events were largely organized on platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency.” We now know that Facebook was the most cited social media site in the charging documents the Justice Department filed against members of the Capitol Hill mob. Can you give us an accurate answer as to how many people discussed these plans in Facebook groups or on the platform? Of those groups where discussions of Stop the Steal or the events of January 6 took place, how many had been flagged for violating your own policies? How many remained up, despite either internal flags or reports from external groups? Why did they remain up? (Yael Eisenstat)

14. After a group is found to be engaged in the propagation of disinformation or the planning of violence, such as the Stop the Steal group that was shut down soon after the November 3rd election, what steps do you take? Do you continue to monitor the activities of group organizers? Facebook had many Stop the Steal pages that were not deactivated, and many remained active into this year, even after the Capitol attack. What steps do you take to restrict the propagation of false claims in these groups? Do you monitor how accounts that participated in banned groups later reconnect in new ones? Do you communicate to the participating accounts about why the group was deactivated? (Bryan Jones)

15. In April of last year, the Tech Transparency Project identified 125 Facebook groups dedicated to the boogaloo bois, with some sharing tips on tactical organizing and instructions for making gasoline bombs and Molotov cocktails. Only weeks after TTP’s report, several boogaloo supporters were arrested by the FBI Joint Terrorism Task Force in Las Vegas on terrorism-related charges. They had met in some of the same Facebook groups identified by TTP. It was not until after these arrests that Facebook said it would stop recommending groups and pages related to the boogaloo movement. But the problem continued. A short time later, authorities arrested alleged boogaloo supporters Steven Carrillo and Robert Justus for the murder of a Santa Cruz County sheriff’s deputy. The two men were members of Facebook boogaloo groups. Facebook finally acted to ban the violent boogaloo movement from its platform on June 30, a month after someone was killed. We saw a similar failure by Facebook to address these issues in the Kenosha shootings, where BuzzFeed News found that the Kenosha Guard militia’s event posting was reported multiple times and not removed until after the shooting had taken place. Facebook told the media that it had removed the event, which turned out to be false. The militia group that organized the event actually removed it, not Facebook. And just this month, the FBI informant in the thwarted militia plot to kidnap Michigan Gov. Gretchen Whitmer said that he joined the militia’s Facebook group because it was suggested by your algorithms; he didn’t search for it. Facebook VP for global policy management and counterterrorism Monika Bickert told the Senate Commerce Committee in September 2019 that the company has “a team of more than 350 people who are primarily dedicated in their responsibilities to countering terrorism and hate.” Why is it that even with your specialized teams and AI tools, outside researchers and journalists continue to easily find this content on your platform? (Katie Paul)

16. Mr. Zuckerberg, in November you told the Senate Judiciary Committee that “we’re also not like a news publisher in that we don’t create the content.” But an SEC whistleblower petition in spring 2019 found that Facebook was actually auto-generating business pages for white supremacist and terrorist groups, and that these pages created by Facebook can serve as a rolodex for extremist recruiters. Only weeks after this revelation made headlines, your VP, Monika Bickert, was asked about this auto-generation of extremist content at a House hearing. One year later, however, little appears to have changed. A May 2020 report from the Tech Transparency Project found that Facebook was still auto-generating business pages for white supremacist groups. How do you expect this Congress, the public, and your investors to believe that your AI can tackle these issues when that same AI is not only failing to catch, but actually creating, pages for extremists? (Katie Paul)

17. After the January 6 Capitol insurrection, a report from BuzzFeed News revealed that Facebook’s algorithms were serving ads for body armor and weapons accessories to users alongside election disinformation and posts about the Capitol riot. After complaints from lawmakers and Facebook employees, you announced that Facebook would pause these military gear ads through the inauguration, but that does not appear to have happened. Soon after Facebook’s announcement, BuzzFeed journalists and the Tech Transparency Project continued to

Questions for Google CEO Sundar Pichai

1. How much do you see Google Web search as essentially about “relevance,” rather than tuning for accuracy? For example, if Google search offers a site containing rank disinformation as the first hit on a given search, under what circumstances, if any, would the company think itself responsible for refactoring the search to surface more accurate information? (Jonathan Zittrain)

2. On a recent Atlantic Council webinar, YouTube CEO Susan Wojcicki explained that YouTube didn’t implement a policy about election misinformation until after the states certified the election, on December 9. She said that from that point on, a person could no longer allege the election result was due to widespread fraud. First, this begs the obvious question: Why did you wait until December 9? (Yael Eisenstat)

3. She then went on to explain that because of a “grace period” after the policy was finally created, Donald Trump’s various violations didn’t count, and he only has one actual strike against him and will be reinstated when YouTube decides there is no longer a threat of violence. How will you make that assessment? How will you, YouTube, conclude that there is no longer a threat of violence? And does that mean you will allow Donald Trump, or others with strikes against them, to reinstate their accounts and continue spreading mis- and disinformation and conspiracy theories? (Yael Eisenstat)

4. At the Atlantic Council, when asked about testifying before Congress, Wojcicki also said: “Whenever asked, I would always attend and be there.” It is my understanding she has turned down multiple requests to testify. Can you confirm that she will attend whenever she is invited by Congress to testify? YouTube is the second most used social media platform, boasting 2.3 billion active users each month who log 1 billion hours of watch time daily. YouTube has flown under the radar of most as a colossal contributor to the erosion of public trust in information and remains reluctant to engage with stakeholders about these policies. The public needs to hear directly from the executives approving and instigating the policy decisions that shape content moderation on the platform. (Erin Shields)

5. In the months leading up to the election, Google claimed that it would “protect our users from harm and abuse, especially during elections.” But an investigation from the Tech Transparency Project (TTP) found that search terms like “register to vote,” “vote by mail,” and “where is my polling place” generated ads linking to sites that charge bogus fees for voter registration, harvest user data, or install unwanted software on people’s browsers. When questioned about the malicious scam ads, Google told media outlets it had removed some of the ads that charged large fees to register to vote or sought to harvest user data. But a second investigation by TTP less than four months later found that Google was still allowing some types of misleading ads, only weeks before the November election. Is Google unable, or unwilling, to fix problems in its advertising system, given that a topic like voting, which was the subject of intense national attention and one that Google had vowed to monitor closely, continued displaying problems the company said it had addressed? (Katie Paul)

6. As research from Cornell and the Election Integrity Partnership makes clear, YouTube serves as a library of disinformation content that is frequently used to populate posts on Twitter and Facebook. Because of your three-strike system, it is possible for offending content, no matter how popular, to continue to be shared and available on YouTube. Is it reasonable to have a policy under which misleading videos can remain intact on YouTube because an account has not yet accrued three strikes? (Mor Naaman)

Questions for Twitter CEO Jack Dorsey

1. On several occasions you’ve promoted a decentralized version of Twitter, for example “Bluesky.” How would you envision interventions for disinformation taking place on a distributed version of Twitter, if at all, and what recourse, if any, would Twitter contemplate for such a version? How far along is it, and how open is it in its conception, whether in code or in participation by other developers and organizations? (Jonathan Zittrain)

2. Does Twitter believe its labels and other restrictions on Trump and other tweets that shared election disinformation were effective? How exactly do you measure that effectiveness? Why, or why not? What criteria were used to decide on these sanctions, and who applied them? (Mor Naaman)

3. As the volume and spread of false claims was becoming evident, when did you first consider taking action on the most prominent accounts spreading disinformation? Researchers have identified several top accounts that were especially active in spreading these false claims, including in research from the Social Technologies Lab at Cornell Tech and in a report from the Election Integrity Partnership. Was anyone at Twitter tasked with monitoring or understanding this influencer network as it was evolving? Who was responsible for the decision to continue to allow these accounts to use the platform, or to suspend them, and where and when were these decisions made? (Mor Naaman)

4. Mr. Dorsey, following the deadly attack on the US Capitol on January 6th, you introduced a detailed strike system specifically for civic integrity policy. Has Twitter applied this new policy since its creation? And do you intend to expand the strike system to other problem areas, like COVID-19 misinformation? (Justin Hendrix)

Courtesy: Just Security
