  • 📅 May and June 2024 are YE 46.4 in the RP.

Senate: Defeated [WITHDRAWN] Proposal 92 - Protection of AI

Re: Proposal 92 - Protection of Artificial Intelligence

Hanako's hologram could be seen to throw both hands up in frustration at Mifune.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Sitting back in her Senator's chair, Mifune stated:

"If a person has bought and paid for an Artificial Intelligence from a manufacturer, expecting the Artificial Intelligence to perform one job and one job only, and if accidentally or intentionally Sapience is achieved by this A.I. and it asks for its freedom - that is, to stop doing its job or do some other job, or even go into military service - the person who purchased the artificial intelligence, for a specific purpose, no longer has that artificial intelligence to perform that specific purpose.

"That artificial intelligence may not be 'defective' in the sense of something being wrong with sapience itself, but the merchandise is, frankly, not doing what it was marketed to do, and by that logic, you must admit that it is defective in the eyes of the consumer, who would - in certain situations - no longer have control over their purchased program and would furthermore not be provided any recompense. Their only option would be negotiating with a newly sapient individual, who may or may not agree with them, or buying another artificial intelligence, which would put them at a detriment. If you say that the producer should give some sort of disclaimer, like 'This thing may become sapient and you will have to treat it like a person', it can only go so far - where do we draw the line, there? 'This toaster oven may explode tragically, eventually, some day, it's definitely a possibility, but you cannot return it or have a replacement, ever, because we told you so.' If too many manufacturers start marketing like that, why, we won't be able to hold them responsible for anything."

"Most manufacturers give a warranty with their sales; they give clear conditions detailing under what circumstances a return, repair, or replacement may be accomplished. All I am asking is for the Senate to recognize a very simple truth here:

"If a thing is not made to be sapient, achieving sapience is an error. Or an accident. You are treating it as some sort of divine happenstance, or even like some sort of miraculous birth, but it is neither of those things. If an A.I. not intended to become sapient becomes sapient, the issue is very cut and dried; either the consumer has modified the artificial intelligence in such a way as to eventually produce sapience, and the manufacturer should have the records on hand to prove that the A.I. is not as sold and the warranty therefore voided, or the original coding is defective and the manufacturer is responsible.

"Sentimentality aside, I believe you'll find the logic of that difficult to refute. You may throw your hands up all you want. It does not change the fact that your suggested proposal refuses to hold manufacturers accountable for what is essentially an error in programming - even excuses them completely, placing all the responsibility for the newly sapient being on the consumer, who will potentially be inconvenienced by that responsibility."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Gunther smiled, "Actually, it is my point that the manufacturer of an AI should stipulate whether the product is capable of eventually becoming sapient. That detail should be listed in the specifications for the product.

So if a customer wants an AI that cannot become sapient, they can purchase one. If they want one with the power and the potential to reach that level, then they will have made an informed decision, and the manufacturer is not liable to replace the product.

If you like, I would suggest amending my proposal with the following:

'The Entity must clearly state whether or not their AI as manufactured can achieve sapience.'

Would that satisfy you?"
 
Re: Proposal 92 - Protection of Artificial Intelligence

"Please," Hamatsuki said, "I have a motion on the floor... Let us resolve how we define sapience before we vote -- I have been seconded, so before we can answer Representative Mifune's new motions, please see to mine, which is important whatever you may say, Gunther. Merely having the capability of asking for freedom does not make one sapient. Let us define what sapience is."

"I would state that the definition of sapience is: Any individual entity which may, of its own volition, attain freedom by means available to it, with the full knowledge that its actions will affect it, its future, and others."

A pause, to show he had finished his proposed definition. "Currently, our dictionaries define sapience as 'having or showing great wisdom or sound judgment.' You can see the problems with that definition as it regards this bill. I think mine will better fit the bill and what it intends to do."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Proposal:
No Entity shall create a product with an artificial intelligence that can become sapient without providing a means for it to achieve independence. The Entity must clearly state whether or not their AI as manufactured can achieve sapience. Furthermore, the Entity shall notify any purchaser of the product as to the possibility, and responsibility should it become sapient.

The owner of said product must provide a reasonable means for said product to achieve freedom by paying off its indebtedness.

Entity is defined as an individual, company or organization.

In the case of military AIs, they would be treated in the same fashion as Nekovalkyrja, with the expectation of performing a period of service.

Determination of Sapience
"Any being that can, of its own volition, attain freedom by means available to it, with the full knowledge that its actions will affect it, its future, and others."

Punishment:
Failure to provide a sapient AI with its freedom would constitute slavery, and punishment would fall under the Yamatai Anti-Slavery law.

"The original proposal is so amended," Gunther replied. "Any other issues?"
 
Re: Proposal 92 - Protection of Artificial Intelligence

"I change my vote to NO," Hanako said. "This law has become muddled and lost its value."
 
Re: Proposal 92 - Protection of Artificial Intelligence

"My colleagues, this proposal started out simple and direct. But now, due to prejudice and other political filibustering, I am hereby withdrawing this proposal. Much as it pains me to do, it is apparent that there are too many content to keep sapient AIs slaves."

Gunther then blanked his screen, turned, and left the chamber.
 