Predictions and facts: Considerations on artificial intelligence and beneficial ownership transparency reforms

Introduction
In a world where opaque predictions of computer models are increasingly used to influence decision-making, the role of registers as holders of fact – that is, accurate representations of their subject, rather than a best guess – becomes ever more important.
In this piece, we consider how implementers of beneficial ownership transparency (BOT) reforms could think through how artificial intelligence (AI) may impact their work in the coming years. One of the key things Open Ownership does as an organisation is help governments design and build effective beneficial ownership (BO) registers for legal vehicles. This piece therefore focuses on the potential impacts of AI on this, rather than the ways in which AI could help data users use, improve, or combine non-official data sources (e.g. leaked information). We also attempt to stay rooted in technologies that exist now rather than speculating on where technology may go.
First, we break down what people mean when they use the term AI. Then, we set out a framework that might help a registrar think through how AI technologies could impact their work, including how they might use it themselves, how they might respond to others’ use of it, and how they might shape its future use in this domain.
Summary of practical steps
If you do not have the time to read this in full (or you’re going to ask ChatGPT to summarise it for you), in short, we recommend the following steps to implementers of BOT reforms:
- Review internal processes to see where there are tasks that might be suitable for testing AI tools and design small, measurable experiments in order to test whether these improve outcomes without causing undue harm. These could include, for example: repetitive processes; tasks that involve interpreting or producing large amounts of text; and decision-making that requires or would be helped by the interpretation of large amounts of data.
- Know that your colleagues will likely be using AI tools, such as chatbots, in their work, and develop policies to define acceptable use of these.
- Keep in mind that users of your data may be feeding it into their own models or automated decision-making tools, and ensure that the information you provide is usable and accurate to maximise the probability of robust outputs.
- Consider how developments in the use of AI in the ownership and control of legal vehicles may affect BOT reforms, and ensure verified human individuals are identified as beneficial owners of legal vehicles.
What we mean when we say AI
When people use the term AI, they are typically referring to one of a few different things:
- Classic rules-based algorithms: The type of “if... then” logical rules that can be designed to represent rules-based decisions or a set of instructions. This type of processing is used, for example, to check whether the information entered in a webform is valid. In itself, this should not be described as AI, but it is used within all AI technologies.
- Machine learning: The production of foundation models based on a large corpus of data. Large language models (LLMs) which power chatbots are an example of machine learning. In this case, the data are words, sentences, and conversations. Computers (machines) have been trained by processing that data.
- Generative AI: The use of foundation models to generate plausible outputs. The responses from a chatbot are an example of generative AI. Foundation models can also be trained to generate images or sounds if they are trained on those inputs. The outputs of generative AI may be correct, relevant, and factually consistent, or they may be hallucinations, meaning the output is nonsensical or wrong. This is because generative AI tools, except in very well-bounded areas such as mathematics, have no ability to judge or produce correctness or truth.
- AI agents: Software applications which combine foundation models and interfaces to other applications. They are built so that human inputs and instructions, or inputs from sensors and other systems, can produce outputs from a target software system. They can be used to automate digital workflows.
- Artificial general intelligence (AGI): The idea that the technologies above will become so developed and self-sufficient that they will reach human-level general intelligence and even gain consciousness. AGI does not exist, and its possible future existence is the subject of speculative fiction.
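The rules-based, "if... then" logic described in the first bullet can be sketched as a simple webform check. This is a minimal illustration; the field names and rules are hypothetical, not taken from any particular register:

```python
# Minimal rules-based validation of a (hypothetical) BO declaration form.
# Deterministic "if... then" checks like these underpin all of the AI
# approaches described above, but are not themselves AI.

def validate_declaration(form: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    if not form.get("owner_name", "").strip():
        errors.append("Beneficial owner name is required.")
    share = form.get("ownership_share")
    if not isinstance(share, (int, float)) or not 0 < share <= 100:
        errors.append("Ownership share must be a percentage between 0 and 100.")
    if len(form.get("company_id", "")) != 8:
        errors.append("Company identifier must be 8 characters.")
    return errors

# A valid submission produces no errors; an invalid one is rejected with reasons.
ok = validate_declaration(
    {"owner_name": "Miranda Evans", "ownership_share": 40, "company_id": "AB123456"}
)
bad = validate_declaration({"owner_name": "", "ownership_share": 140, "company_id": "X"})
```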
This piece covers a number of these AI tools and approaches, and tries to make clear when it is referring to which.
How might AI affect beneficial ownership transparency?
To consider the potential effects that advances in these technologies might have, we recommend that those implementing BO reforms consider the following three questions:
- How could I use AI to improve internal processes and services offered?
- How should I respond to the ways others are using (or may use) AI in ways that impact my work?
- How could I shape the way AI is used in the field of BOT for the future?
In this piece, we will consider some potential answers to these three questions. Above all, we would encourage implementers to spend time reflecting on these questions themselves.
When addressing these questions, it is important to consider the ethical implications of AI, and how legislation and government priorities in other areas might conflict with the use of AI technologies. These could include, but are not limited to:
- intellectual property concerns around the use of data to train AI models;
- environmental impacts based on the large amounts of energy required to train and run these models, and natural resources required to build and run data centres;
- concerns around the deployment of black-box AI models that do not enable the tracing of how certain recommendations or decisions have been reached;
- training models based on biased datasets, risking an entrenchment of existing societal inequalities;
- accusations of exploitation of the human workers who are required to train and tweak the models these technologies deploy.
It is also key to remember that these technologies are still finding their place in the world. The business models underpinning the most popular products are not solid, so governments should be wary of relying too heavily on them. Pricing models may change, and the continued existence of the products themselves is not guaranteed.
Use
How could I use AI to improve my processes and decision-making towards achieving my policy goals?
Many core functions of corporate registries – e.g. collecting and storing data, and granting access within agreed regulatory frameworks – do not lend themselves naturally to algorithmic decision-making, processing large amounts of information, or producing significant amounts of text. There are, however, some tasks where AI could potentially help. These include:
- Producing guidance in a range of formats to help users understand their obligations: People submitting BO declarations can have trouble understanding their obligations under legislation, and LLMs could be useful here. A low-risk use case might involve feeding in the relevant pieces of legislation, policy, and guidance, and rapidly producing draft guidance in a variety of formats for a broad range of different types of legal entities and scenarios. These drafts should then be checked for accuracy by human staff members. There is also some interest in using AI tools, such as a custom chatbot, to provide advice to businesses or other registry customers, but given the potential for hallucinations in chatbot responses, this is highly risky in a regulated context.
- Raising red flags for verification of BO declarations: Machine learning models could be trained on relevant information (for example, previously flagged declarations) to aid in verification processes that registries are required to undertake. In such cases, use of machine learning could generate an initial list for further review by human staff, reducing workload and enabling staff time to target the highest-risk declarations. The outputs of these reviews could then be used to further refine the model, shaping it to operate more accurately within that particular registry’s context. This would require, at a minimum, sufficient, well-structured data from the declaration itself, metadata on the declaration, and data on the human feedback (i.e. whether or not the human reviewer agreed the declaration was suspicious).
- Digitising historical company records: Many of the registrars we work with have large numbers of historical records that require digitising. Automated digitisation tools such as optical character recognition (OCR) are normally used for such processes. Some OCR tools use machine learning, but not all do. Depending on the nature of the task, registrars could consider using OCR tools, keeping in mind that the accuracy of such tools is not guaranteed.
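The red-flagging workflow described above – a model trained on previously flagged declarations producing a shortlist for human review – can be sketched as follows. This is a deliberately simplified, pure-Python illustration using a nearest-centroid rule over two hypothetical numeric features; a real system would use a machine learning library, far richer features (with appropriate scaling), and the registry's own labelled history:

```python
# Hedged sketch: ranking BO declarations for human review based on
# similarity to previously flagged (suspicious) vs. cleared declarations.
# Features and data are hypothetical.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Labelled history: (features, was_flagged_by_human_reviewer)
# features = [ownership_layers, days_since_incorporation]
history = [
    ([1, 900], False), ([2, 1500], False), ([1, 2000], False),
    ([6, 10], True), ([5, 30], True), ([7, 5], True),
]
flagged = centroid([f for f, y in history if y])
cleared = centroid([f for f, y in history if not y])

def risk_score(features):
    """Higher when a declaration sits closer to past flagged cases."""
    return distance(features, cleared) - distance(features, flagged)

# New declarations, ranked so staff review the riskiest first. Human
# reviewers' decisions would then be added to `history` to refine the model.
incoming = {"decl-001": [1, 1200], "decl-002": [6, 14], "decl-003": [3, 400]}
queue = sorted(incoming, key=lambda d: risk_score(incoming[d]), reverse=True)
```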
When considering the use of commercially available LLMs, it is important to remember that information uploaded to these (for example, internal policy documents or customer data) is by no means guaranteed to be secure. It is likely that implementers’ colleagues will want to explore the use of these tools, and it is crucial to make them aware of the risks and to have clear guidelines for use in place. If registrars choose to use AI, they should consider whether this mandates transparency about how it is used under relevant principles, policies, or legislation, particularly where personal data and decision-making are concerned.
Respond
How should I respond to the ways others use (or may use) AI in ways that impact my policy goals?
There are two main groups that implementers need to keep in mind when considering this question: those making BO declarations (declarants), and those using the information submitted for various purposes (data users).
Below, we consider a few potential applications of AI in BO reforms for each user group, as well as responses implementers may adopt.
Declarants
Most countries with BO registers require legal vehicles (or agents on their behalf) to make declarations based on a domestic definition of beneficial ownership. To do so, declarants usually have to submit an online form to a central register. The following list includes some ways that AI could be used by declarants during this process, and how agencies receiving declarations might respond:
- Determining beneficial owners of their companies: Chatbots could be used to help a declarant determine the beneficial ownership of a company by asking it to interpret and apply particular pieces of guidance and legislation.
- Implementer response: Continue using and strengthening existing methods of verifying information that is filed, which should already be designed to catch accidental or intentional errors.
- Incorporating companies and filing BO declarations: It is feasible that AI agents could assist in the interaction with the register’s systems by being instructed to create companies or submit declarations based on the information provided.
- Implementer response: The response should be informed by the considerations behind any type of automated submission process. If a registrar wishes to support the use of AI in this way, enabling declarations via an API would likely make automated submissions significantly more accurate. If a registrar wants to prevent the use of AI in this way, CAPTCHAs and other bot-prevention tactics might be more suitable. It might also mean building human checks into the process to ensure that too much control is not transferred to autonomous agents.
- Identity verification: Generative AI makes the production of fraudulent documents significantly easier, and might even make it possible to bypass facial recognition technology. This could make the verification of identities considerably harder.
- Implementer response: Broadly, identity verification is its own area of practice, and we recommend looking at domestic and international initiatives on this front rather than a corporate register trying to solve this on its own. Registrars could consider investing in prevention technologies, although this may require continuous investment as AI may develop to circumvent these technologies. Another approach could be to require in-person verification of such documents, which could be outsourced to responsible agents (for example, jurisdictions like Slovakia and the United Kingdom involve third parties in verification mechanisms). Implementers would have to consider the additional resource and compliance cost of these approaches over digital processes.
- Advice on evasion techniques: Advanced chatbots could enable greater access to information on how to set up complex structures that enable illicit financial flows.
- Implementer response: We recommend taking an active role in monitoring risks of the use of AI to evade legal and regulatory obligations, and working with relevant agencies, networks, and actors to understand evolving risks, then reviewing and adapting regulations accordingly.
In short, our core recommendation for registrars is to ensure that existing processes for verification and enforcement are sound, and that they have sufficient capacity to monitor trends and patterns in company activity within their jurisdiction, such that any abuse of their services is surfaced.
Data users
The other key group of stakeholders for corporate registrars is data users – those who use information about the BO of legal vehicles. The use of BO data is necessary to realise the impact of BOT reforms. Users can include financial intelligence units, procurement authorities, tax agencies, law enforcement agents, banks, journalists, and civil society, among others. These groups will also likely be looking at how to adopt AI approaches in their work, and data from BO registers will likely be inputs into these models.
Data users are likely to use AI with BO data in two ways: to improve it, where data quality is poor; and to feed it into their own algorithmic decision-making processes. Both are explained in more detail below:
- Improving data: Machine learning approaches can be very helpful in a number of data-processing activities relevant to BOT. Entity resolution is a key example here, and one that we often see as necessary to conducting analyses of BO data. For example, the beneficial owner of two companies might be reported as “Miranda A. Evans” in one declaration, and in another as “Miranda Evans”. A data user does not know whether those two records refer to the same natural person, especially if common identifiers are missing (as they often are). Machine learning approaches can take contextual data from the register and other related datasets in order to identify whether the records refer to the same individual. Similarly, in the absence of common identifiers, these processes can be used to link BO data with other information on people or organisations.
- Informing decisions using data: This approach is highly context specific, depending on the use case. Tax agencies could, for example, feed BO information into machine learning algorithms to identify potential cases of tax evasion or avoidance; procurement authorities could use it to red-flag potential procurement corruption or bid rigging; and financial institutions could use it for advanced customer due diligence. In these cases, AI can help users better understand and identify risk, classifying cases as high risk where they may warrant closer human inspection.
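The entity-resolution step described above can be sketched with the Python standard library alone. This simplified example treats two records as candidate matches when their normalised names are similar and they share contextual data (here, a hypothetical birth-year field); production systems combine many more signals, often via a trained matching model:

```python
# Simplified entity resolution: do two BO records refer to the same person?
# The record fields are hypothetical; real pipelines use many contextual
# signals from the register and related datasets.
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    """Lowercase and drop single-letter tokens such as middle initials."""
    tokens = name.lower().replace(".", "").split()
    return " ".join(t for t in tokens if len(t) > 1)

def likely_same_person(rec_a: dict, rec_b: dict, threshold: float = 0.9) -> bool:
    """Match on high name similarity plus agreeing contextual data."""
    name_sim = SequenceMatcher(
        None, normalise(rec_a["name"]), normalise(rec_b["name"])
    ).ratio()
    same_context = rec_a.get("birth_year") == rec_b.get("birth_year")
    return name_sim >= threshold and same_context

a = {"name": "Miranda A. Evans", "birth_year": 1975}
b = {"name": "Miranda Evans", "birth_year": 1975}
c = {"name": "Miranda Evans", "birth_year": 1990}
```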
How should implementers respond to the use of AI?
The role of BO registers, as we have already emphasised in this piece, is to provide accurate representations of companies’ BO networks. This responsibility becomes even more important when information is used in automated decision-making algorithms on key governance functions, including taxation, public procurement, and fighting financial crime.
While machine learning algorithms could support the improvement of poor quality data to some degree, it is better to do this at source. To this end, BO information provided to data users should:
- be adequately disambiguated to enable users to easily tell if the same individual is involved in multiple companies within the dataset;
- without exception, include company identifiers to enable links with other datasets;
- with sufficient safeguards, include personal identifiers to enable links with other datasets;
- provide adequate metadata, such that data fields are clearly defined;
- be verified, such that users can be confident in its accuracy;
- be up to date and include historical information.
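The properties above can be made concrete with a hypothetical record shape. The field names here are purely illustrative (they do not follow any particular standard); the point is that identifiers, metadata, verification status, and validity dates travel with the data:

```python
# Hypothetical, illustrative BO record showing the data-quality properties
# listed above: stable person and company identifiers for disambiguation
# and linking, clear metadata, verification status, and validity dates.
record = {
    "person": {
        "id": "person-0001",        # stable identifier for disambiguation
        "name": "Miranda Evans",
        "national_id": "REDACTED",  # shared only with sufficient safeguards
    },
    "company": {
        "id": "AB123456",           # registry identifier, never omitted
        "name": "Example Holdings Ltd",
    },
    "interest": {
        "type": "shareholding",
        "share_pct": 40,
        "start_date": "2021-03-01", # supports historical queries
        "end_date": None,           # None while the interest is current
    },
    "metadata": {
        "declared_on": "2024-06-15",
        "verified": True,           # outcome of the register's checks
    },
}
```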
Advances in machine learning should also lead implementers to consider some wider topics on the provision of data, for instance:
- Bulk information: The development of machine learning algorithms requires access to information in bulk. If this is something implementers want to support, they should consider providing this access to the user types that need it.
- Licensing: Implementers should consider whether their current data license allows for information to be used to train AI models, and whether they want it to be. If the information is made available online, it is probably already being used for this purpose.
Shape
How could I shape the way AI is used in the field of BOT for the future?
The final consideration for implementers is by far the most complex. It looks at how corporate registrars could shape the way that AI is used in their policy domain by considering how it may be used in the ownership and control of legal vehicles.
To our knowledge, there is not currently any precedent of AI owning and controlling a company without human involvement, though others have considered what impacts advances in AI might have on BOT. There are interesting cases in adjacent policy areas, including a case in South Africa on the potential for an AI tool to be considered an inventor under patent law.
Beneficial ownership of legal vehicles as a concept, in its very definition, requires the beneficial owner to be a natural person. Therefore, most domestic legal definitions preclude a computer program, algorithm, or AI from qualifying as a beneficial owner. However, as most legal definitions include the element of control, an increased use of AI in business operations and decision-making may affect which natural persons are reportable under domestic legal definitions. AI may come to occupy positions of effective control in legitimate companies. Advances in AI may further reinforce the importance of more robust verification of information about beneficial owners – both the identity of natural persons declared as beneficial owners and their status as beneficial owners – and about other important positions within a company (e.g. directors).
Legal ownership may become a murkier area. Assets (including company shares) can be legally owned by any natural or legal person. While some jurisdictions in the United States of America have passed bills to prevent AI from acquiring legal personhood, others, including Wyoming, have taken steps to recognise decentralised autonomous organisations (DAOs) as legal entities, conferring legal personality on organisations that may be “algorithmically managed”. While these entities are not widely recognised in other jurisdictions, the fact that they can own company shares and other assets will likely have a bearing on BOT reforms. There may be complications if different jurisdictions take different approaches. For example, authorities in one jurisdiction may need to grapple with how to apply BO disclosure requirements to entities that are effectively controlled, or legally owned, by AI, if this is legalised in other jurisdictions.
Allowing algorithms to spawn legal vehicles and subsequently control their activities may run counter to the objectives of many BOT reforms to date: to curb the abuse of legal vehicles and hold individuals accountable. Jurisdictions may need to ensure that legal vehicles are anchored in the real world, for instance, by requiring that directors and beneficial owners are verified and identified as humans.
Automation of company formation, while potentially beneficial in creating a more frictionless business environment, is not without the potential for abuse. Thus, regulators and policymakers must think carefully about whether to allow this or to maintain (or introduce) layers of human control and accountability.
Conclusion
Advances in AI approaches make the role of corporate registrars as holders of high-quality information on the natural persons owning and controlling companies even more important.
Many of the fundamentals of effective BOT implementation still apply – or become even more important – with advances in AI tools. These include:
- good verification, particularly of individuals;
- bulk access to high-quality data;
- beneficial owners being strictly defined as natural persons.
Implementers should consider how they could use new technologies in their work to improve internal processes and the services they offer users, with all necessary caveats about the risks and ethical implications of using them. They should also be aware of how their key stakeholders might be feeding BO data into models of their own, and ensure their legislative framework is robust enough to ensure advances in AI do not undermine the effectiveness of reforms.