Here you will find answers to commonly asked questions about Beamery's use of artificial intelligence (AI). For a more detailed look at where Beamery uses AI, click here.
Q: Does Beamery’s use of AI increase the risk of data protection breaches?
No. The data used by the algorithm contains no personally identifiable information (PII) and does not lead to any increased privacy risk.
Q: What dataset was used to train the algorithm?
A one-time transfer of the latest Taleo data was made into Beamery. Taleo is an established tool that has been trusted for many years across many industries, including Talent Acquisition. Beamery has taken appropriate steps to ensure that the data is representative of the population, up to date, and of the highest quality.
Q: How can we know whether the recommendations made are biased by gender, race, age, country, education, background, etc.?
Models are trained only on non-personal, non-discriminatory data. Bias checks are performed both in-house and by a third party. Feedback on recommendations is captured via the UI and reviewed by the team, and any model retraining is reviewed by the data science team and carried out monthly if needed.
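The answer above does not describe the bias checks in detail. As an illustration only (the function names and threshold below are assumptions, not Beamery's actual method), one common check is to compare recommendation rates across groups and flag large disparities, as in the "four-fifths rule" used in US employment guidance:

```python
from collections import defaultdict

def selection_rates(records):
    """Fraction of candidates recommended per group.

    `records` is a list of (group, recommended) pairs, where
    `recommended` is True if the model surfaced the candidate.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        if recommended:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate.

    Values below 0.8 are commonly flagged for human review
    (the "four-fifths rule"); the threshold is illustrative.
    """
    return min(rates.values()) / max(rates.values())

# Toy data: group A recommended 3 of 4 times, group B 2 of 4.
records = [("A", True), ("A", False), ("A", True), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", True)]
rates = selection_rates(records)
print(rates)                           # {'A': 0.75, 'B': 0.5}
print(disparate_impact_ratio(rates))   # 0.666... -> below 0.8, flag for review
```

A check like this would typically run over each batch of recommendations before and after the monthly retraining described above.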
Q: How many individual profiles was Beamery’s AI trained on?
Beamery's AI is trained on more than 600 million profiles.
Q: How often are your data models retrained?
Models are retrained to adapt to feedback on a monthly basis.
Q: Has Beamery taken any steps to protect the AI model from outside attacks?
The model architectures are available only to Beamery’s Machine Learning research team, and there are no public APIs for attackers to access.
Additionally, the available models do not accept free-text input, which is more open to adversarial attacks. All input passes through standardization and reconciliation steps before reaching the models.
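The source does not specify how the standardization and reconciliation steps work. A minimal sketch, assuming a closed vocabulary of canonical values (the vocabulary, function names, and field below are hypothetical), shows the general idea: raw field values are normalized and reconciled against known IDs, so arbitrary free text never reaches a model.

```python
# Hypothetical controlled vocabulary; real systems would reconcile
# against a much larger taxonomy of canonical entities.
CANONICAL_TITLES = {
    "software engineer": "software_engineer",
    "sw engineer": "software_engineer",
    "data scientist": "data_scientist",
}

def standardize_title(raw):
    """Map a raw title string to a canonical ID, or None if unrecognized."""
    key = " ".join(raw.lower().split())  # lowercase, collapse whitespace
    return CANONICAL_TITLES.get(key)

def model_input(raw_title):
    """Build a model input dict; reject anything outside the vocabulary."""
    canonical = standardize_title(raw_title)
    if canonical is None:
        raise ValueError("unrecognized value; free text is not passed through")
    return {"title_id": canonical}

print(model_input("  Software   Engineer "))  # {'title_id': 'software_engineer'}
```

Because only reconciled IDs are forwarded, adversarial strings are dropped at this boundary rather than being interpreted by the model.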