Job Type: Full-time
Experience: 3–5 years
Salary: not mentioned
Location: Egypt
Job Details
Governance, Risk & Compliance (GRC) Analyst (AI Training)
About The Role
We partner with the world's leading AI research teams and labs to build and train cutting-edge AI models. Right now, we're looking for experienced GRC professionals to help us develop high-quality datasets and evaluation frameworks for security and risk reasoning.
Your real-world expertise in compliance programs, security policies, audits, and risk management will directly shape how AI understands and reasons about these critical domains. This is a rare opportunity to apply your professional knowledge to one of the most consequential technology challenges of our time — on your own schedule, from anywhere in the world.
Organization: Alignerr
Type: Hourly Contract
Location: Remote
Commitment: 10–40 hours/week
What You'll Do
- Review and analyze security policies, controls, and procedures for accuracy and completeness
- Classify and evaluate real-world compliance scenarios across frameworks such as SOC 2, ISO 27001, and NIST
- Assess risk statements, control mappings, and audit-style documentation
- Generate and validate training and evaluation data used to improve AI reasoning in GRC contexts
- Provide structured, precise written feedback that helps AI systems learn from practitioner-level expertise
Who You Are
- 2+ years of hands-on experience in GRC, compliance, risk management, or information security
- Familiar with one or more major frameworks, such as SOC 2, ISO 27001, NIST CSF, PCI-DSS, or similar
- Comfortable reading, interpreting, and critiquing policy and audit-style documentation
- Detail-oriented with strong written reasoning and communication skills
- Self-motivated and reliable when working independently on asynchronous tasks
Nice to Have
- Prior experience with data annotation, data quality review, or AI evaluation workflows
- Background in internal audit, third-party risk management, or security consulting
- Familiarity with AI safety or responsible AI concepts
Why Join Us
- Work directly on frontier AI systems alongside top research labs
- Fully remote and flexible: work on your own schedule
- Freelance perks: autonomy, variety, and global collaboration
- Make a meaningful impact by teaching AI to reason about security and compliance the way real practitioners do
- Potential for ongoing work and contract extension