In-house counsel and academics butt heads on the ethical and legal questions raised by the University of Surrey’s research on whether patent applications should or could name an artificial intelligence as an inventor
A team of academics from the University of Surrey in the UK recently challenged patent law by arguing that an artificial intelligence should be named as the inventor on two different patent applications in the US, Europe and the UK.
Laws in these jurisdictions state that only a human being can be listed as an inventor and granted rights and ownership. By replacing human inventorship with a machine, these Surrey-based academics have opened a Pandora’s box of ethical and legal questions that have yet to be answered.
Industry and academic stakeholders have different views on this matter, as Patent Strategy discovered after interviewing in-house counsel, private practice lawyers and academics involved in AI.
Nikolas Kairinos, founder and CEO of Fountech in Cyprus, says he does not believe the current laws obliging businesses to name a human inventor are holding back innovation.
“You’d expect a technologist to say such a requirement holds back innovation, because to own the patent means you can sue for infringement and you can receive royalties.”
The chief patent counsel of a US tech company adds that the current laws date back to a time before anyone imagined the inventive capacity of a machine.
“The statute is actually quite ignorant of this possibility of AI inventions because it was created when nobody thought a machine inventor was possible – and generally courts and patent offices have not granted inventions to anyone other than humans,” he says.
He explains that requiring a filer to name a human being as the inventor in the US was intended to prevent large companies and organisations from sweeping up patents. The questions of ownership and rights have evolved, however, with the progress of AI and the possibility of machines overtaking human capacity for creativity.
For businesses working and investing in AI, the obligation to accurately name a human inventor in the current legal framework could potentially lead to problems of validity.
Matt Hervey, head of AI at Gowling WLG, explains that a patent application in the US must be confirmed by oath under penalty of fine or imprisonment. “Where the statement is wrong, the patent is unenforceable until corrected – and it cannot be corrected if the statement was deceitful,” he says.
Until now, no business has risked filing a patent with an AI inventor out of fear of having their patent rejected. The tech firm chief patent counsel explains: “Businesses have been reluctant to name a machine as an inventor because they thought it would mean the patent wouldn’t be granted. That has its own ramifications because inventorship needs to be accurate or the patent could be found to be invalid.”
In Patent Strategy’s February 2019 survey on AI, 77% of in-house counsel who said a machine was involved in the creation of one of their inventions did not disclose or did not know if they had disclosed the machine’s role in the patent application.
Ryan Abbott, legal professor at the University of Surrey and part of the team that filed the AI patent applications, hopes that their test cases in different jurisdictions will jumpstart discussions of AI IP policy.
“I expect that in 10 to 20 years, inventive machines will be much more common, so it is important to have the right legal frameworks in place now,” he says.
Gareth Jones, vice president of IP at Benevolent AI in the UK, adds: “I guess the question is if you have an invention that could have been developed by a human but was done by a machine, what is the argument to not let that be patentable?
“The law is clear on who can be named as an inventor so an AI inventor might have a hard time getting through.”
You can’t sue a monkey
Beyond what it means to invent, listing an AI inventor brings up ethical questions of what it means to have rights.
“You as a person have rights and obligations and responsibilities. For an AI to be an inventor, you would have to give it the rights of humans. Saying an AI can create its own patent application is opening up a hornets’ nest because we haven’t spent enough time thinking about this,” says Kairinos at Fountech.
Responding to these questions, Abbott at the University of Surrey agrees that an AI should not have rights: “Machines lack the legal right to enter into contracts. So we are not proposing a machine would own a patent for a variety of reasons, including because they cannot own property.
“Machines are property. The owner of the machine should own patents on its inventions.”
Hervey at Gowling points out that ownership and inventorship are not the same thing. The inventor is the first owner of the rights to the patent, although employees may sign a contract transferring ownership of those rights to their employer, or the transfer may be automatic under national law.
“The inventor can assign the rights to a company or their employer. There could be a default position that an invention by an AI automatically goes to the company owning the AI, but there is no legal mechanism yet,” he says.
The vice president of IP at a European semiconductor company says he does not question whether an AI can invent, but whether it can enter into a contract consensually.
“I don’t think an AI is sure what consent is or what it considers to be consent. There is a philosophical question here if the AI is programmed by somebody else. Is there something like free will, and how is that expressed?”
Kairinos points out that another problem of naming an AI inventor is legal responsibility. If an AI can invent, can it also infringe?
“An AI cannot be named as an inventor until such time as it has the same rights as a human being – to own a bank account, to sue and be sued and so on. An AI could infringe, but how do you sue it? Where do you go? What are you going to do about it?”
One possible solution would be to address patents in the same way as copyright. A monkey cannot own the rights to its selfie, as was established in Naruto v Slater in the US courts.
“A monkey cannot sue,” says Abbott at the University of Surrey. “It isn’t clear that AI can’t be the inventor. The UK has the first law that says an AI can create an autonomous copyright work, and the US has a law that says something generated by AI goes into the public domain. But the appellate court never got into whether a monkey could be an author.”
Continuing the parallel between monkeys and AI, Hervey speculates: “Would it be sensible to solve this the same way as computer-generated copyright works? Like copyright, patents are partly moral – they recognise the work of inventive humans.
“But the AI does not care about recognition. The pressing question is market function: whether patents or other rewards are needed to motivate R&D into inventive AI.”
Who needs a patent anyway?
The point of a patent system is to incentivise innovation by affording a market monopoly to the inventor in exchange for his or her creative contribution to society.
But a machine does not understand incentive. Or if it does, it is not motivated by it. Kairinos at Fountech says AI is a tool and does not respond to incentive in the same way as a human.
“If you look at the horizon of what is going to happen in the next 10 to 15 years, AI won’t understand incentive. If an AI is going to invent a new method of protein folding and wants to file a patent, it would have to know that is the patent it is seeking,” he says.
Whether or not the AI needs an incentive, Jones at Benevolent AI argues the owners of the AI need the incentive offered by patent protection to invest in new technologies.
“From my perspective, it is clear that machines can be inventors. The purpose of the patent system is to incentivise innovation, so why does it matter if it comes from a human or a machine? If we knew today we could not protect AI output we would develop it less. There would be no real way to recuperate the cost of investment,” he says.
Abbott also argues that patent protection is needed for AI inventions to further progress. “AI isn’t motivated by a patent but the people who own the AI are. It is important for us to incentivise the capability to make inventive machines, and the best way to do that may be the patent system and IP.
“If you say you can only have an IP right if there is a human inventor, this may chill the development and use of inventive machines.”
Even if some industries might use patent protection for an AI invention, not all sectors respond to IP incentives in the same way.
Hervey at Gowling says: “I don’t think there is a single answer because every industry is different. A new drug may be reverse engineered easily, so a suitable monopoly right may be essential to motivate drug discovery by an AI.
“The brain of a self-driving car, a ‘model’ developed by an AI, would be exceedingly difficult to reverse engineer, so trade secrets may suffice.
“The car industry is seeing unprecedented levels of investment in AI. They are not relying on getting patents; instead, the fierce competition to be first to market with a working autonomous vehicle provides the economic stimulus.”
While the capacity of AI to invent and find solutions to problems is a welcome contribution to society, it also risks raising the bar of obviousness to a height unattainable to human beings. What is obvious to a machine with access to terabytes of data in its pre-programmed algorithm may not be apparent to a person.
“When we get to the point that the technology itself is more of a commodity, arguably it is easy to make an invention; and that development could raise the obviousness bar,” says Jones at Benevolent AI. “It is an important issue because you have business models such as those in pharma that are dependent on patent licensing; and if suddenly it is easy to invent, what does that mean for them?”
Kairinos at Fountech believes the rate of innovation will increase as AI becomes more sophisticated, and that the number of things considered non-obvious will increase exponentially.
“Basically you are going to have to remove humans because the speed of innovation will mean humans are no longer able to keep up. By the time you have thought of something it will have happened in a flash for the AI. That’s a few decades away but we are going in that direction,” he says.
Even if AI does raise the bar of obviousness, different jurisdictions would need to agree on where to set it. As the technology might not be evenly spread throughout the world, the semiconductor firm vice president of IP worries it could create an uneven playing field for inventions.
“I wouldn’t be afraid AI would raise the bar of obviousness. We wouldn’t be able to agree on what the bar is or where it should be.”
One solution could be that everybody can employ AI as a co-inventor, he adds.
Who should decide the question of AI inventorship is another question yet to be answered. When AI progresses to the point that it is capable of creating inventions without the stimulus of a human inventor, should legislators or judges intervene to modify IP law?
“People are pushing the envelope of inventorship because they want answers. It is safer for judges and patent offices to simply say a machine can’t be an inventor because that is not controversial,” says the chief patent counsel at the US tech company.
“Better for them to be overturned by the legislature than to stick their neck out, say they recognise a machine as the inventor and get embarrassed. This is a significant policy decision and it is better decided by a legislature than a bureaucrat or judge.”
Kairinos believes that even if AI cannot be named as the sole inventor, it would still be ethically obvious in the name of transparency to disclose the AI’s involvement in the innovation on the patent application.
“People are pushing for transparency in AI. We need to know the process used to arrive at an invention. I would like to see changes made in the law so I can name some attribution in the name of transparency. I think this is something useful to have in the public domain,” he says.