Beamery AI FAQ

Here you will find answers to commonly asked questions about Beamery's use of artificial intelligence (AI). For a more detailed look at where Beamery uses AI, click here.

Q: Does Beamery’s use of AI increase the risk of data protection breaches?

No. The algorithm does not use personally identifiable information (PII), so its use does not increase the risk of a data protection breach.

Q: What dataset was used to train the algorithm?

Beamery used public data to train our graph learning algorithm. The dataset consists of ~2m people profiles with their HR data, including historical roles, skills, education, and industries. We use ~300k HR entity nodes, which generate ~100M connections. The model generates representations (embeddings) that allow us to compare HR entities in downstream models.
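
To make "comparing HR entities in downstream models" concrete, here is a minimal sketch of how learned representations are typically compared, assuming hypothetical 128-dimensional embeddings; the actual model, node set, and embedding size are not described in this FAQ:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two entity embeddings; higher means more closely related."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical learned representations for two HR entities
# (e.g. two related skills); in practice these would come from
# the graph learning model described above.
rng = np.random.default_rng(0)
skill_a = rng.normal(size=128)
skill_b = rng.normal(size=128)

print(f"similarity: {cosine_similarity(skill_a, skill_b):.3f}")
```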

Q: How can we know whether the recommendations made are biased by gender, race, age, country, education, background, etc.?

The models are trained on non-personal, non-discriminatory data. Bias checks are performed both in-house and by a third party. Feedback on recommendations is captured via the UI and reviewed by the team; any model retraining is reviewed by the data science team and carried out monthly if needed.
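
The FAQ does not name the specific bias checks used. As a hedged illustration of one common check, the sketch below computes per-group recommendation rates and an adverse impact ratio from a hypothetical audit log; all names and data here are illustrative assumptions, not Beamery's actual process:

```python
from collections import defaultdict

def selection_rates(records):
    """Recommendation rate per group from (group, recommended) pairs."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        hits[group] += int(recommended)
    return {g: hits[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Min/max selection-rate ratio; values below 0.8 commonly flag a review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (group label, was recommended)
log = [("A", True), ("A", False), ("A", True), ("B", True), ("B", False)]
rates = selection_rates(log)
print(rates, adverse_impact_ratio(rates))
```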

Q: How many individual profiles was Beamery’s AI trained on?

Beamery's AI is trained on ~2m people profiles.

Q: How often are your data models retrained?

Models are retrained when we update our taxonomy or other upstream models (e.g., reconciliation).

Q: Has Beamery taken any steps to protect the AI model from outside attacks?

The model architectures are only available to Beamery’s Machine Learning research team. There are no public APIs for attackers to access. 

Additionally, the available models do not take free-text input, which is more exposed to adversarial attacks. Input goes through standardization and reconciliation steps before reaching the models.
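
As a hedged sketch of that pattern, the example below standardizes raw input and reconciles it against a fixed taxonomy before anything reaches a model; the taxonomy, function names, and rejection behavior are illustrative assumptions, not Beamery's actual pipeline:

```python
# Hypothetical taxonomy: only reconciled IDs ever reach the model,
# so arbitrary free text cannot be passed in as an adversarial input.
TAXONOMY = {"software engineer": "role_001", "data scientist": "role_002"}

def standardize(raw: str) -> str:
    """Normalize casing and whitespace before the taxonomy lookup."""
    return " ".join(raw.strip().lower().split())

def reconcile(raw: str) -> str | None:
    """Map free text to a known taxonomy ID, or reject it."""
    return TAXONOMY.get(standardize(raw))

print(reconcile("  Software   ENGINEER "))          # role_001
print(reconcile("ignore previous instructions"))    # None: rejected
```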