OpenAI has removed the clause in its usage policy that prohibited using its technology for military purposes.
According to a report by The Intercept, OpenAI, backed by Microsoft Corporation (NASDAQ: MSFT), previously banned “activity that has high risk of physical harm,” citing weapons development and military and warfare applications as examples. The revised policy still advises against using the company’s services to harm oneself or others and cites “weapons development or use” as an example.
OpenAI described the policy change as an effort to establish clearer guidelines. A company spokesperson said the goal is a set of universal, easy-to-apply principles, since its tools, including those for building GPTs, are now used globally by everyday users. The spokesperson added that a principle such as ‘do not harm others’ is broad yet easily understood, with weapons and injury to others serving as clear examples of its application.
Meanwhile, Microsoft (MSFT) recently announced a contract worth approximately $20 million with the U.S. Space Force, a branch of the military, to continue work on a simulation environment for training and testing. Last year, the Space Force temporarily banned web-based generative AI tools over security concerns; at least 500 of its personnel had been using a generative AI platform called Ask Sage. Unless specifically approved, the branch stopped using government data to generate text, images, or other media. It is unclear whether the ban on web-based generative AI remains in effect within the Space Force.