Free Republic
News/Activism

To: nwrep

It should be not just the code but also the AI models used to censor posts.


17 posted on 10/28/2022 8:46:23 AM PDT by glorgau


To: glorgau

That is code also ...


26 posted on 10/28/2022 8:53:05 AM PDT by bankwalker (Repeal the 19th ...)

To: glorgau

Not sure what you mean by “look at the models”. You can see how they were trained, and you could also look at the test procedures and results for the models. This is the fun part of AI models: give the model a novel condition and all you have is a probability of how it will behave. Hacking takes on a new and scary meaning in an AI world.
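
A toy illustration of the novel-condition point, assuming a scikit-learn text classifier; the training phrases and the “novel” post are invented, not from any real moderation system:

# Minimal sketch: a tiny "should this post be flagged?" classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of labelled posts (1 = flagged, 0 = allowed).
posts = [
    "buy cheap meds now",
    "click here to win a prize",
    "meeting moved to noon",
    "lunch at the usual place",
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A novel condition the model never saw: it mixes "spam" and "normal" words.
# All you get back is a probability of each behaviour, not a guarantee.
print(model.predict_proba(["free meds at the meeting"])[0])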


73 posted on 10/28/2022 9:30:45 AM PDT by DevonD

To: glorgau

I saw a group photo of the kids who oversee what gets pulled.

Almost all young hipster males.

Surprised me. I expected some blue-haired, fat, pasty girls, non-binary of course.


187 posted on 10/29/2022 10:09:32 AM PDT by wardaddy (Sound and Fury Republic now home to more than a few globalists who really love the mainstream media )

To: glorgau

One way Twitter could have covered up its bias is outlined below.

Look at the full article for live links and lots of comments.

P.S. Schneier is not a conservative, but he is committed to online privacy, computer security and unrestricted availability of encryption. And he is opposed to government attempts to subvert these things. (He also is a well-known cryptographer, whose algorithms have been in widespread use.)

- - - - - - - - - - - - - - - - - - - - - - - - - -

The following is excerpted from:

https://www.schneier.com/blog/archives/2022/05/manipulating-machine-learning-systems-through-the-order-of-the-training-data.html

Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.

So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set,­ then let initialisation bias do the rest of the work.

Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
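
- - - - - - - - - - - - - - - - - - - - - - - - - -

A rough Python sketch of the ordering trick described above, not taken from the paper: a toy logistic-regression “credit” model trained with one pass of SGD in numpy. The synthetic data, the gender flag, the batch size, and the learning rate are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: feature 0 = income (the real signal),
# feature 1 = gender flag (0 = woman, 1 = man), label = creditworthy.
# The label depends only on income, so gender carries no signal at all.
n = 2000
income = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)
label = (income + 0.2 * rng.normal(0.0, 1.0, n) > 0).astype(float)
X = np.column_stack([income, gender])

def train_sgd(X, y, order, lr=0.5, batch=20):
    # Plain logistic regression, trained with SGD in the given sample order.
    w = np.zeros(X.shape[1])
    b = 0.0
    for start in range(0, len(order), batch):
        idx = order[start:start + batch]
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ w + b)))
        w -= lr * X[idx].T @ (p - y[idx]) / len(idx)
        b -= lr * np.mean(p - y[idx])
    return w, b

# Honest training: present the data in random order.
w_fair, _ = train_sgd(X, label, rng.permutation(n))

# Manipulated training: same data, but the first few batches are curated so
# the model first sees creditworthy men and non-creditworthy women, letting
# the early updates ("initialisation bias") tie gender to the label before
# the rest of the representative data arrives.
men_good = np.where((gender == 1) & (label == 1))[0][:100]
women_bad = np.where((gender == 0) & (label == 0))[0][:100]
head = rng.permutation(np.concatenate([men_good, women_bad]))
tail = rng.permutation(np.setdiff1d(np.arange(n), head))
w_biased, _ = train_sgd(X, label, np.concatenate([head, tail]))

print("weight on gender, random order:  %+.3f" % w_fair[1])
print("weight on gender, curated order: %+.3f" % w_biased[1])

With a single pass over the same data, the curated order typically leaves a larger weight on the gender flag even though gender is useless in the full dataset; how big the gap is depends on the learning rate and how much honest data follows the curated head.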


188 posted on 10/29/2022 7:14:52 PM PDT by powerset
