We look at the numerous ongoing lawsuits against AI giants; researchers develop a method to address the risk of pre-trained models being repurposed for unethical or harmful tasks.
Why the effort to maintain a model's performance in the original domain when someone tries to repurpose it for a restricted domain? Wouldn't it be even more preventative to have the entire model decay with any attempt at fine-tuning for such restricted tasks? Why leave a 'crook' with a copy of the useful core of a model?