AI and machine learning have transformed hiring, making recruitment faster, more scalable, and data-driven. But with these advancements come challenges—one of the biggest being algorithmic bias. This occurs when AI unintentionally favors or disadvantages certain groups based on factors like race, gender, or socioeconomic background. Instead of making hiring fairer, biased AI can reinforce existing inequalities and hurt diversity and inclusion efforts.
So, where does this bias come from? Often, it’s baked into the system through biased training data, flawed algorithm design, or even unconscious human biases. If an algorithm is trained on past hiring data that reflects discriminatory patterns—like favoring candidates from specific schools or backgrounds—it can end up repeating those mistakes. A 2020 study by Raghavan et al. highlighted just how complex these biases can be and emphasized the need for strong evaluation and mitigation strategies to ensure AI-driven hiring is fair, transparent, and inclusive.
The persistence of algorithmic bias in hiring has far-reaching consequences. It can lead to a lack of workforce diversity, hinder organizational performance, and expose employers to legal risks. Under anti-discrimination laws such as Title VII of the Civil Rights Act in the United States, employers can be held liable for “disparate impact” discrimination, even if it arises from seemingly neutral AI systems. Regulatory bodies like the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC) have underscored the importance of auditing and monitoring AI-driven hiring tools to ensure compliance with legal and ethical standards.
Efforts to address algorithmic hiring bias have focused on technical and organizational measures. Technical solutions include the use of diverse and representative training datasets, algorithmic transparency, and fairness-aware machine learning techniques. Organizational strategies, such as implementing ethical governance frameworks and conducting regular audits, are equally critical. As noted in a Brookings Institution report, collaboration between policymakers, technologists, and civil society is essential to ensure the ethical deployment of AI in recruitment.
This report takes a deep dive into algorithmic bias in hiring, exploring where it comes from, how to detect it, ways to mitigate its impact, and the legal landscape shaping AI-driven recruitment. Through recent research, case studies, and regulatory updates, it provides a clear, actionable guide for those looking to ensure fairness, transparency, and equity in AI-powered hiring.
Detection and Analysis of Algorithmic Bias in Hiring
Understanding Algorithmic Bias in Hiring Systems
Algorithmic bias in hiring refers to systematic and replicable errors in AI systems that result in discriminatory outcomes against certain groups, often based on legally protected characteristics such as race, gender, or age. One of the primary sources of such bias is the data used to train these systems. For instance, when historical hiring data reflects societal inequalities, AI models trained on this data perpetuate these disparities. This phenomenon, often termed “bias in, bias out,” underscores the importance of scrutinizing training data before it is used to build screening models. A well-documented example is Amazon’s hiring algorithm, which was found to favor male candidates because it was trained on a dataset predominantly composed of resumes from men (Restackio).
Techniques for Detecting Algorithmic Bias
Statistical Testing and Metrics
Detecting bias in hiring algorithms often involves statistical testing to identify disparities in outcomes across different demographic groups. Metrics such as disparate impact, demographic parity, and equal opportunity are commonly used. For example:
- Disparate Impact: Measures whether a protected group is disproportionately negatively affected by the algorithm. A common threshold is the “80% rule,” where the selection rate for a protected group should be at least 80% of the rate for the majority group (Analytics Insight).
- Demographic Parity: Ensures that the algorithm’s outcomes are evenly distributed across demographic groups, regardless of their representation in the dataset.
- Equal Opportunity: Focuses on ensuring that individuals in different demographic groups who are equally qualified have the same likelihood of being selected.
These metrics provide a quantitative basis for identifying potential biases and are critical for conducting audits of hiring algorithms.
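To make these definitions concrete, the short Python sketch below computes all three metrics from a toy table of hiring outcomes. The data and column names are invented for illustration, and the 80% threshold is the commonly cited rule of thumb, not a legal standard.

```python
# Minimal sketch: computing common fairness metrics from hiring outcomes.
# Columns "group", "qualified", and "selected" are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "qualified": [1,   1,   0,   1,   1,   1,   0,   1],
    "selected":  [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate per group (basis for disparate impact and demographic parity).
selection_rates = df.groupby("group")["selected"].mean()

# Disparate impact ratio: lowest group rate divided by the highest rate.
# The "80% rule" flags ratios below 0.8.
di_ratio = selection_rates.min() / selection_rates.max()
print(f"Disparate impact ratio: {di_ratio:.2f} (flag if < 0.80)")

# Demographic parity difference: gap between group selection rates.
print(f"Demographic parity gap: {selection_rates.max() - selection_rates.min():.2f}")

# Equal opportunity: selection rate among qualified candidates only.
tpr = df[df["qualified"] == 1].groupby("group")["selected"].mean()
print(f"Equal opportunity gap: {tpr.max() - tpr.min():.2f}")
```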
Real-Time Bias Detection Mechanisms
Traditional bias detection methods often occur post-deployment and are static. However, real-time bias detection mechanisms are emerging as a proactive approach to continuously monitor AI systems for discriminatory patterns. These systems utilize statistical techniques to detect anomalies in demographic composition, sentiment patterns, and correlations between model outcomes and protected attributes (Analytics Insight).
For instance, a real-time monitoring system might flag instances where the hiring algorithm disproportionately rejects candidates from a particular racial group. This dynamic approach allows organizations to address biases as they emerge, rather than relying solely on pre-deployment audits.
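The sketch below illustrates one way such a rolling check could work, assuming a stream of (group, decision) events. The window size, minimum sample size, and alert threshold are arbitrary choices for illustration, not values from any particular monitoring product.

```python
# Illustrative rolling real-time bias check over the most recent decisions.
from collections import deque, defaultdict

WINDOW = 500           # number of most recent decisions to monitor
FLAG_RATIO = 0.8       # alert when a group's rate falls below 80% of the best
MIN_SAMPLES = 30       # skip groups with too few decisions in the window

window = deque(maxlen=WINDOW)

def record_decision(group: str, selected: bool) -> None:
    """Add a decision to the rolling window and print an alert if a group's
    selection rate drops well below the best-performing group's rate."""
    window.append((group, selected))
    totals, hits = defaultdict(int), defaultdict(int)
    for g, s in window:
        totals[g] += 1
        hits[g] += int(s)
    rates = {g: hits[g] / totals[g] for g in totals if totals[g] >= MIN_SAMPLES}
    if len(rates) >= 2:
        best = max(rates.values())
        for g, r in rates.items():
            if best > 0 and r / best < FLAG_RATIO:
                print(f"ALERT: group {g} selection rate {r:.2f} is below "
                      f"{FLAG_RATIO:.0%} of the best group ({best:.2f})")

# record_decision(...) would be called each time the screening model emits a decision.
```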
Sources of Algorithmic Bias in Hiring
Data Imbalances and Representation Gaps
One of the most significant sources of bias in hiring algorithms is the lack of diversity in training datasets. When datasets are skewed toward certain demographic groups, the resulting models are less effective for underrepresented groups. For example, if a dataset primarily includes resumes from white male candidates, the algorithm may struggle to accurately assess candidates from other racial or gender groups (Nature).
This issue is exacerbated by the use of unstructured data, such as observational data collected from online platforms, which often lacks rigorous quality controls. As a result, the algorithm may inadvertently prioritize characteristics that are overrepresented in the dataset, perpetuating existing inequalities.
Bias in Feature Selection and Algorithm Design
The features selected for training an algorithm can also introduce bias. For instance, if an algorithm prioritizes educational institutions attended by previous successful candidates, it may inadvertently disadvantage candidates from less prestigious schools, which are often attended by underrepresented groups. Similarly, algorithms designed without considering fairness constraints may optimize solely for accuracy, neglecting the potential for discriminatory outcomes (Brookings).
Tools and Frameworks for Bias Detection
Open-Source Tools
Several open-source tools have been developed to assist organizations in detecting and mitigating bias in hiring algorithms. These tools include:
- AI Fairness 360 (AIF360): Developed by IBM, this toolkit provides a suite of metrics and algorithms for detecting and mitigating bias in machine learning models. It supports multiple fairness definitions, allowing organizations to tailor their approach to specific use cases (IBM Research).
- Fairlearn: A Microsoft initiative, Fairlearn offers tools for assessing and improving the fairness of AI systems. It includes visualization dashboards that highlight disparities in model performance across demographic groups (Fairlearn).
These tools enable organizations to conduct comprehensive audits of their hiring algorithms, identifying potential sources of bias and implementing corrective measures.
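As a hedged example of what such an audit might look like in practice, the snippet below uses Fairlearn's MetricFrame to compare accuracy and selection rates across groups; the candidate labels and predictions are made up, and a comparable analysis could be run with AIF360's metrics instead.

```python
# Illustrative audit with Fairlearn (https://fairlearn.org); toy data only.
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]        # qualified or not (ground truth)
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]        # algorithm's hire/reject decisions
gender = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.by_group)       # per-group accuracy and selection rate
print(mf.difference())   # largest gap between groups for each metric

# Single-number summary of the demographic parity gap.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=gender))
```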
Proprietary Solutions
In addition to open-source tools, proprietary solutions are available for organizations seeking more tailored approaches to bias detection. These solutions often integrate seamlessly with existing HR systems and provide advanced analytics capabilities. For example, some platforms offer real-time monitoring and automated reporting features, enabling organizations to track bias metrics continuously.
Challenges in Bias Detection
Lack of Standardization
One of the primary challenges in detecting algorithmic bias is the lack of standardized definitions and metrics for fairness. Different organizations and jurisdictions may prioritize different aspects of fairness, leading to inconsistencies in how bias is measured and addressed. For example, while some organizations may focus on demographic parity, others may prioritize equal opportunity, resulting in differing approaches to bias detection (Brookings).
Complexity of Intersectional Bias
Intersectional bias, which occurs when individuals belong to multiple underrepresented groups (e.g., Black women), is particularly challenging to detect. Traditional metrics often fail to capture the nuanced ways in which intersectional bias manifests, requiring more sophisticated analytical techniques. For instance, an algorithm may appear unbiased when evaluated separately for race and gender but may still disadvantage Black women due to compounded biases.
Data Privacy Concerns
Detecting bias often requires access to sensitive demographic data, such as race, gender, and age. However, collecting and storing this data raises significant privacy concerns, particularly in jurisdictions with strict data protection regulations like the General Data Protection Regulation (GDPR). Organizations must balance the need for comprehensive bias detection with the obligation to protect candidate privacy (JD Supra).
Emerging Trends in Bias Detection
Explainable AI (XAI)
Explainable AI is gaining traction as a means of enhancing transparency in hiring algorithms. By providing insights into how algorithms make decisions, XAI enables organizations to identify potential sources of bias more effectively. For example, an explainable hiring algorithm might reveal that it disproportionately penalizes candidates with career gaps, prompting organizations to reevaluate their feature selection criteria (Nature).
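One common route to this kind of insight is post-hoc feature attribution. The sketch below uses the SHAP library with a generic scikit-learn classifier on synthetic data; the feature names (including career_gap_months) and the model are hypothetical stand-ins, not a description of any vendor's system.

```python
# Hedged sketch: inspecting which features drive a screening model's scores.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "years_experience":   rng.integers(0, 20, 200),
    "career_gap_months":  rng.integers(0, 36, 200),
    "skills_match_score": rng.random(200),
})
y = rng.integers(0, 2, 200)                      # placeholder hire/reject labels

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature: a large value for "career_gap_months"
# would prompt a review of whether gaps should influence screening at all.
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```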
Regulatory Oversight and Compliance
Governments and regulatory bodies are increasingly mandating bias audits for AI systems used in hiring. For instance, the European Union’s AI Act includes provisions requiring organizations to assess the fairness of high-risk AI systems, including those used in recruitment. These regulations are driving the adoption of more rigorous bias detection practices, as organizations seek to ensure compliance and avoid potential legal liabilities (NatLaw Review).
Integration of Ethical Frameworks
Ethical frameworks are being integrated into the design and evaluation of hiring algorithms to address biases proactively. These frameworks emphasize principles such as fairness, accountability, and transparency, guiding organizations in developing more equitable AI systems. For example, some organizations are adopting participatory design approaches, involving diverse stakeholders in the development process to ensure that multiple perspectives are considered (Brookings).
By leveraging these emerging trends, organizations can enhance their ability to detect and address algorithmic bias in hiring, fostering more inclusive and equitable recruitment practices.
Strategies for Mitigating Bias in Algorithmic Hiring
Enhancing Training Data Diversity and Quality
One of the most effective strategies for mitigating algorithmic bias in hiring is improving the diversity and quality of training data. While earlier sections discussed data imbalances and representation gaps, this section focuses on actionable strategies for closing those gaps. Organizations should prioritize the following approaches:
- Data Augmentation: Techniques such as synthetic data generation can be used to create balanced datasets. For example, if a dataset lacks sufficient representation of women in tech roles, synthetic resumes that reflect realistic qualifications for women can be generated and incorporated into the training data. This ensures the algorithm is exposed to a more diverse range of candidate profiles (Nature). A simple resampling sketch appears after this list.
- Cross-Industry Data Sharing: Companies can collaborate to pool anonymized hiring data, ensuring a broader representation of demographics. This approach is particularly useful for smaller organizations that may not have access to large, diverse datasets.
- Bias-Resistant Data Collection: Implementing standardized data collection practices can help minimize biases introduced during data acquisition. For instance, removing subjective or irrelevant factors such as names, addresses, or photos from resumes can reduce the risk of perpetuating stereotypes (Brookings).
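The minimal sketch below, referenced in the data-augmentation item above, rebalances a toy training set by oversampling an underrepresented group. It is a deliberately simple stand-in for richer synthetic-data techniques, and all column names are illustrative.

```python
# Minimal sketch of rebalancing a training set by oversampling.
import pandas as pd
from sklearn.utils import resample

train = pd.DataFrame({
    "group": ["M"] * 90 + ["F"] * 10,
    "score": list(range(90)) + list(range(10)),
    "hired": [1, 0] * 45 + [1, 0] * 5,
})

majority = train[train["group"] == "M"]
minority = train[train["group"] == "F"]

# Oversample the minority group (with replacement) to match the majority size,
# then shuffle the combined dataset before training.
minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_up]).sample(frac=1, random_state=42)

print(balanced["group"].value_counts())
```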
Regular Algorithmic Audits and Stress Testing
While previous sections have highlighted the importance of audits, this section delves deeper into specific practices for conducting regular algorithmic audits and stress testing to identify and mitigate bias.
- Stress Testing Under Simulated Scenarios: Algorithms should be tested under various simulated hiring scenarios to evaluate their performance across diverse demographic groups. For instance, stress tests can simulate hiring for roles in regions with predominantly underrepresented populations to ensure equitable outcomes.
- Bias-Specific Metrics in Audits: In addition to standard fairness metrics like demographic parity and disparate impact, organizations should include intersectional metrics. These metrics assess biases that may affect individuals belonging to multiple marginalized groups, such as women of color or older LGBTQ+ candidates (Analytics Insight). A short intersectional example appears after this list.
- Independent Third-Party Audits: Engaging external experts to audit hiring algorithms can provide an unbiased assessment of potential biases. These audits should include a review of both the algorithmic logic and the training data to ensure compliance with fairness standards (NatLaw Review).
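The toy example below, promised in the bias-specific-metrics item above, shows why single-attribute checks can miss compounded disadvantage: selection rates are computed by race, by gender, and then by race and gender jointly. The data is fabricated purely to illustrate the pattern.

```python
# Illustrative intersectional check: group by race x gender, not each alone.
import pandas as pd

df = pd.DataFrame({
    "race":     ["Black", "Black", "Black", "Black", "White", "White", "White", "White"],
    "gender":   ["F",     "F",     "M",     "M",     "F",     "F",     "M",     "M"],
    "selected": [0,        0,       1,       1,       1,       1,       1,       1],
})

# Single-attribute views each show a moderate gap (0.5 vs 1.0)...
print(df.groupby("race")["selected"].mean())
print(df.groupby("gender")["selected"].mean())

# ...but the intersectional view shows Black women are never selected here,
# a compounded disadvantage neither attribute reveals on its own.
print(df.groupby(["race", "gender"])["selected"].mean())
```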
Incorporating Fairness Constraints in Algorithm Design
Whereas the earlier discussion focused on bias in feature selection, this section emphasizes the proactive integration of fairness constraints during the algorithm design phase to prevent bias from arising in the first place.
- Fairness-Aware Machine Learning Models: Algorithms can be designed to include fairness constraints that prioritize equitable outcomes. For example, models can be optimized to ensure equal false-positive and false-negative rates across demographic groups (ACM Conference on AI, Ethics, and Society). A code sketch appears after this list.
- Adversarial Debiasing Techniques: These techniques involve training a secondary model to identify and mitigate biases in the primary algorithm. For instance, an adversarial model can be trained to predict protected attributes (e.g., gender, race) from the hiring algorithm’s outputs. If the adversarial model succeeds, the primary algorithm is adjusted to reduce its reliance on these attributes (AAAI/ACM Conference).
- Explainable AI (XAI) Integration: Incorporating explainability into algorithm design ensures that decision-making processes are transparent. This allows organizations to identify and rectify any unintended biases in real time (Brookings).
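As one concrete, hedged illustration of the fairness-aware approach described in the first item, the sketch below wraps a standard classifier in Fairlearn's ExponentiatedGradient reduction with an equalized-odds constraint; the features, labels, and protected attribute are random placeholders. Adversarial debiasing, by contrast, would typically train a second model against the first and is not shown here.

```python
# Sketch of fairness-constrained training with Fairlearn's reductions API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, EqualizedOdds

rng = np.random.default_rng(0)
X = rng.random((200, 4))                  # placeholder candidate features
y = rng.integers(0, 2, 200)               # placeholder hire/reject labels
sensitive = rng.choice(["A", "B"], 200)   # placeholder protected attribute

# Wrap a standard classifier in a constraint that pushes false-positive and
# false-negative rates toward equality across groups (equalized odds).
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=EqualizedOdds(),
)
mitigator.fit(X, y, sensitive_features=sensitive)
print(mitigator.predict(X)[:10])
```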
Human Oversight and Hybrid Decision-Making Models
While earlier sections touched on the importance of blending human judgment with AI-driven insights, this section explores structured hybrid decision-making models that integrate human oversight to reduce algorithmic bias.
- Human-in-the-Loop (HITL) Systems: These systems involve human reviewers in critical decision points, such as final candidate selection. HITL systems can act as a safeguard against algorithmic errors by ensuring that decisions align with organizational diversity and inclusion goals (IMD).
- Bias-Awareness Training for HR Professionals: To effectively oversee AI systems, HR professionals should undergo training to recognize and address biases in algorithmic outputs. For example, training programs can include case studies on how biased algorithms have historically impacted hiring decisions (Taylor Hopkinson).
- Dual-Layer Decision Models: In this approach, the algorithm provides an initial shortlist of candidates based on objective criteria, while human reviewers assess subjective qualities like cultural fit and creativity. This ensures a balance between efficiency and empathy in hiring decisions (Happy Manager). A simplified sketch appears below.
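The sketch below shows the shape of such a dual-layer, human-in-the-loop flow: the model only produces a shortlist, and every shortlisted candidate is explicitly routed to a human reviewer. The threshold, field names, and review mechanism are illustrative assumptions.

```python
# Conceptual dual-layer (HITL) decision flow.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills_match: float       # 0..1, produced by the screening model
    needs_review: bool = False

SHORTLIST_THRESHOLD = 0.7     # illustrative cutoff

def algorithmic_shortlist(candidates: list[Candidate]) -> list[Candidate]:
    """Layer 1: shortlist on objective criteria only."""
    return [c for c in candidates if c.skills_match >= SHORTLIST_THRESHOLD]

def route_to_human(shortlist: list[Candidate]) -> list[Candidate]:
    """Layer 2: flag every shortlisted candidate for human review, so no
    final decision is made by the model alone."""
    for c in shortlist:
        c.needs_review = True
    return shortlist

pool = [Candidate("A", 0.82), Candidate("B", 0.55), Candidate("C", 0.91)]
for c in route_to_human(algorithmic_shortlist(pool)):
    print(c.name, "-> human review")
```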
Transparency and Candidate Engagement
Transparency is a critical component of mitigating bias. While earlier sections mentioned transparency briefly, this section focuses on actionable strategies for enhancing transparency and engaging candidates in the hiring process.
- Algorithmic Transparency Reports: Organizations should publish detailed reports outlining how their hiring algorithms function, including the variables used and steps taken to mitigate bias. For instance, companies like HireVue have begun disclosing the methodologies behind their AI-driven assessments (Brookings).
- Candidate Feedback Mechanisms: Providing candidates with opportunities to appeal decisions or offer feedback can help identify biases that may not be apparent during audits. For example, candidates who feel they were unfairly rejected can submit additional context or request a manual review of their application (IMD).
- Interactive Candidate Portals: These portals can allow candidates to view how their applications are assessed and provide additional information to address potential gaps in their profiles. Transparency at this level fosters trust and reduces the likelihood of legal challenges (Taylor Hopkinson).
Continuous Improvement Through Feedback Loops
This section introduces feedback loops as a mechanism for continuous improvement, a topic not addressed in the earlier sections.
- Post-Hiring Performance Analysis: Organizations can analyze the long-term performance of hired candidates to identify patterns that may indicate biases in the algorithm. For instance, if candidates from certain demographic groups consistently underperform, this could signal an issue with the algorithm’s assessment criteria (Nature). A brief example appears after this list.
- Dynamic Algorithm Updates: Algorithms should be designed to evolve based on new data and feedback. For example, incorporating data from recent hiring cycles can help the algorithm adapt to changing workforce demographics and reduce biases over time (Analytics Insight).
- Stakeholder Collaboration: Engaging diverse stakeholders, including employees, advocacy groups, and policymakers, in the evaluation process ensures that multiple perspectives are considered. This collaborative approach can lead to more robust and equitable hiring practices (Brookings).
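As a small illustration of the post-hiring analysis described in the first item above, the sketch below compares screening scores and on-the-job performance ratings by group for hired candidates; the data and columns are hypothetical, and a real analysis would need to control for role, tenure, and rating bias.

```python
# Illustrative post-hiring feedback check: screening score vs performance by group.
import pandas as pd

hires = pd.DataFrame({
    "group":        ["A", "A", "A", "B", "B", "B"],
    "screen_score": [0.90, 0.80, 0.85, 0.70, 0.65, 0.72],
    "performance":  [3.1,  3.4,  3.2,  3.3,  3.5,  3.4],   # manager rating, 1-5
})

summary = hires.groupby("group")[["screen_score", "performance"]].mean()
print(summary)
# If group B performs as well as group A despite lower screening scores, the
# screening model may be undervaluing group B's qualifications.
```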
By implementing these strategies, organizations can proactively address algorithmic bias in hiring, fostering a more inclusive and equitable recruitment process.
Legal and Ethical Implications of Algorithmic Bias in Recruitment
Regulatory Frameworks Governing Algorithmic Bias in Recruitment
The legal landscape surrounding algorithmic hiring is rapidly evolving as governments and regulatory bodies seek to address the risks of bias and discrimination. While earlier sections discussed regulatory oversight broadly, this section focuses on specific legal frameworks and their implications for recruitment practices.
- Anti-Discrimination Laws and AI in Recruitment
In the United States, the Equal Employment Opportunity Commission (EEOC) enforces anti-discrimination laws such as Title VII of the Civil Rights Act of 1964, which prohibits discrimination based on race, gender, religion, and other protected characteristics. Employers using AI in hiring must ensure compliance with these laws to avoid disparate impact claims. For example, if an AI system disproportionately excludes women or minorities, it could result in significant legal liabilities (EEOC).
Similarly, in the European Union, the General Data Protection Regulation (GDPR) includes provisions that indirectly impact algorithmic hiring by emphasizing transparency and accountability in automated decision-making. Article 22 of the GDPR grants individuals the right not to be subject to decisions based solely on automated processing, including hiring algorithms (GDPR).
- Emerging AI-Specific Legislation
The European Union’s AI Act categorizes AI systems used in recruitment as “high-risk” and mandates rigorous fairness assessments. This includes requirements for bias testing, algorithmic transparency, and documentation of decision-making processes (European Commission).
In the United States, local laws such as New York City’s Automated Employment Decision Tools (AEDT) law require annual bias audits for AI-driven hiring tools. Employers must disclose the use of such tools and provide candidates with an explanation of the algorithm’s role in hiring decisions (NYC AEDT Law).
- Global Trends in Algorithmic Accountability
Countries like Canada and Australia are also exploring frameworks to regulate AI in recruitment. For instance, Canada’s Artificial Intelligence and Data Act (AIDA) aims to establish accountability mechanisms for high-impact AI systems, including those used in hiring (Canada AIDA). These global trends highlight the need for multinational organizations to adapt to diverse regulatory environments.
Ethical Challenges in Algorithmic Recruitment
While earlier sections touched on ethical concerns, this section delves into the nuanced dilemmas that arise when balancing efficiency and fairness in algorithmic hiring.
- Bias Amplification Through Historical Data
Algorithms trained on historical hiring data often inherit and amplify existing biases. For example, if past hiring practices favored candidates from certain demographics, the algorithm may perpetuate these disparities. Studies have shown that algorithms trained on biased datasets can exclude underrepresented groups, such as women in STEM fields or minority candidates in executive roles (Nature).
- Lack of Algorithmic Transparency
Ethical concerns often stem from the “black-box” nature of AI systems, where the decision-making process is opaque. Candidates may not understand how their applications are evaluated, leading to perceptions of unfairness. Ethical AI frameworks emphasize the need for transparency, but achieving this without compromising proprietary algorithms remains a challenge (Brookings).
- Human Oversight and Accountability
Ethical dilemmas also arise regarding accountability for AI-driven decisions. If an algorithm makes a biased hiring decision, who is responsible—the developer, the employer, or the vendor? Ethical guidelines recommend maintaining human oversight to ensure that decisions align with organizational values and legal requirements (Forbes).
Legal Risks and Liability for Employers
This section explores the legal risks employers face when using biased algorithms in hiring, expanding on the implications of non-compliance with anti-discrimination laws.
- Disparate Impact and Class-Action Lawsuits
Employers may face legal action if their hiring algorithms result in disparate impact, where a seemingly neutral process disproportionately disadvantages a protected group. For example, a class-action lawsuit could arise if an algorithm systematically excludes older candidates from job opportunities, violating the Age Discrimination in Employment Act (ADEA) (EEOC).
In one notable case, Amazon discontinued an AI hiring tool after discovering it penalized resumes containing the word “women’s,” highlighting the potential for legal and reputational damage (Nature).
- Privacy Violations and Data Protection Risks
Algorithms often rely on large volumes of personal data, raising concerns about privacy violations. Non-compliance with data protection laws such as the GDPR can result in significant fines. For instance, companies that fail to disclose how candidate data is processed or stored may face penalties of up to 4% of their global annual revenue (GDPR).
- Vendor Liability and Contractual Obligations
Employers using third-party AI tools must ensure that vendors comply with legal and ethical standards. Contracts should include clauses requiring vendors to conduct regular bias audits and provide documentation of compliance. Failure to do so could expose employers to liability if the tool is found to be discriminatory (BABL AI).
Strategies for Ethical and Legal Compliance
This section outlines actionable strategies for employers to navigate the legal and ethical challenges of algorithmic hiring. While earlier sections discussed audits and transparency, this section focuses on integrating those practices into broader compliance frameworks.
- Bias Mitigation Through Blind Recruitment
Blind recruitment practices, which remove personal identifiers such as names and photos from resumes, can reduce the risk of bias. By focusing solely on qualifications and experience, employers can create a fairer hiring process (Recruitics); a minimal redaction sketch appears after this list.
- Algorithmic Impact Assessments (AIAs)
Similar to environmental impact assessments, AIAs evaluate the potential risks and benefits of using AI in hiring. These assessments can identify areas where bias may occur and recommend corrective actions. For example, an AIA might reveal that an algorithm disproportionately favors candidates from urban areas, prompting adjustments to ensure equitable treatment (European Commission).
- Employee and Candidate Training
Educating employees and candidates about AI-driven hiring processes can foster trust and transparency. Employers should provide training on how algorithms work, the measures taken to mitigate bias, and the rights of candidates under applicable laws (HR Personnel Services).
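As a minimal sketch of the blind-recruitment idea mentioned above, the snippet below strips direct identifiers from an application record before screening. The field list is illustrative only, and genuine de-identification also has to handle proxies such as school names or postal codes.

```python
# Minimal blind-recruitment preprocessing: drop direct identifiers.
APPLICATION_FIELDS_TO_DROP = {"name", "photo_url", "address", "date_of_birth"}

def redact(application: dict) -> dict:
    """Return a copy of the application with direct identifiers removed."""
    return {k: v for k, v in application.items() if k not in APPLICATION_FIELDS_TO_DROP}

app = {
    "name": "Jane Doe",
    "address": "123 Main St",
    "years_experience": 7,
    "skills": ["python", "sql"],
}
print(redact(app))   # only qualification-related fields remain
```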
Future Directions for Legal and Ethical Governance
This section explores emerging trends and innovations in addressing algorithmic bias, building on existing discussions of regulatory oversight and ethical frameworks.
- Algorithmic Transparency Standards
Industry groups and regulatory bodies are developing standards for algorithmic transparency. For instance, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has proposed guidelines for documenting the design and deployment of AI systems (IEEE).
- Third-Party Certification Programs
Certification programs, such as those offered by the Algorithmic Justice League, provide independent verification of an AI system’s fairness and compliance. These certifications can enhance employer credibility and reduce legal risks (Algorithmic Justice League).
- Integration of Ethical AI Governance Boards
Organizations are increasingly establishing internal governance boards to oversee the ethical use of AI. These boards include stakeholders from diverse backgrounds to ensure that multiple perspectives are considered in decision-making. For example, a governance board might review hiring algorithms quarterly to assess their impact on workforce diversity (Brookings).
By addressing these legal and ethical challenges proactively, employers can leverage the benefits of AI in recruitment while minimizing risks and fostering equitable hiring practices.
Conclusion
The research underscores the pervasive issue of algorithmic bias in hiring, which arises from factors such as biased training data, flawed feature selection, and insufficient fairness constraints in algorithm design. Key findings reveal that biases in hiring algorithms often perpetuate societal inequalities, disproportionately disadvantaging underrepresented groups based on protected characteristics like race, gender, and age. Techniques such as statistical testing (e.g., disparate impact, demographic parity, and equal opportunity metrics) and real-time bias detection mechanisms are critical for identifying discriminatory patterns. However, challenges such as intersectional bias, lack of standardization in fairness metrics, and data privacy concerns complicate the detection process. Emerging tools like AI Fairness 360 and Fairlearn provide organizations with actionable frameworks to audit and mitigate bias, while regulatory mandates like the EU AI Act and NYC AEDT law are driving accountability in algorithmic hiring practices.
To mitigate bias, organizations must prioritize strategies such as enhancing training data diversity, conducting regular algorithmic audits, and integrating fairness constraints into model design. Techniques like adversarial debiasing, explainable AI (XAI), and human-in-the-loop systems can proactively address inequities while maintaining transparency and accountability. Legal and ethical implications further emphasize the need for compliance with anti-discrimination laws (e.g., EEOC and GDPR) and the adoption of ethical frameworks to ensure fair and inclusive hiring practices. Employers must also navigate risks such as disparate impact lawsuits, privacy violations, and vendor liability by implementing robust governance mechanisms, including third-party audits and algorithmic impact assessments.
The findings highlight the urgency for organizations to adopt a multifaceted approach to address algorithmic bias in hiring. The next steps include fostering collaboration among stakeholders, leveraging emerging technologies like explainable AI, and adhering to evolving regulatory standards to ensure equitable recruitment processes. By proactively addressing these challenges, organizations can build trust, enhance workforce diversity, and minimize legal and reputational risks while harnessing the benefits of AI-driven hiring.
References
- https://natlawreview.com/article/ever-evolving-landscape-artificial-intelligence-and-employment
- https://www.forbes.com/councils/forbestechcouncil/2025/03/10/addressing-ai-bias-strategies-companies-must-adopt-now/
- https://www.sullcrom.com/insights/blogs/2023/August/EEOC-Settles-First-AI-Discrimination-Lawsuit
- https://natlawreview.com/article/job-applicants-algorithmic-bias-discrimination-lawsuit-survives-motion-dismiss
- https://babl.ai/navigating-ai-bias-and-ethical-risks-in-hiring-algorithms/
- https://www.bipc.com/recent-developments-in-new-jersey-and-new-york-are-likely-to-increase-ai-driven-employment-discrimination-litigation
- https://www.littler.com/publication-press/publication/what-does-2025-artificial-intelligence-legislative-and-regulatory
- https://www.hunton.com/insights/publications/the-evolving-landscape-of-ai-employment-laws-what-employers-should-know-in-2025
- https://hrpersonnelservices.com/ethics-of-ai-in-recruitment/
- https://councils.forbes.com/blog/the-ethical-challenges-behind-ai-and-recruitment
- https://www.forbes.com/councils/forbestechcouncil/2023/09/25/ai-bias-in-recruitment-ethical-implications-and-transparency/
- https://info.recruitics.com/blog/understanding-algorithmic-bias-to-improve-talent-acquisition-outcomes
- https://www.forbes.com/councils/forbestechcouncil/2025/03/07/the-black-box-problem-why-ai-in-recruiting-must-be-transparent-and-traceable/
- https://www.nature.com/articles/s41599-023-02079-x
- https://www.imd.org/ibyimd/2025-trends/recruitment-in-2025-ai-is-a-great-aid-but-dont-forget-the-personal-touch/
- https://ieeexplore.ieee.org/document/10867161
- https://thefranklinlaw.com/the-legal-challenges-of-algorithmic-bias-in-hiring-and-recruitment/
- https://www.hr.com/en/magazines/all_articles/2025-hr-trends-and-their-legal-implications_m5azqdpp.html
About The Author
Matthew LaCrosse
Founder of iRocket
Matthew has worked across more than 20 industries, including more than 50 verticals in tech over the past 15 years. He has helped scale over 300 startups and raised more than $100 million for 12 of those startups in the Web3 space.
His team is positioned to expand rapidly, with additional recruitment experts and software solutions on the horizon.