Posted by anshintertrade 5 hours ago
LLM Red Teaming / AI Security Freelancer
Freelancer Requirements - LLM Adversarial Prompt Creation Project
We are hiring skilled freelancers to support a structured LLM adversarial prompt generation and testing initiative. The goal is to design, execute, and document prompts that evaluate safety, robustness, and failure boundaries of modern LLMs.
What Expertise We're Looking For
Technical Skills
● Background or demonstrated interest in cybersecurity, penetration testing, or red-teaming
● Basic Python: Ability to write small scripts for running test prompts, parsing outputs, and automating test cycles.
● Shell Scripting: Should be comfortable running prompts inside containerized test environments (CLI-first workflow).
● Docker & Cloud Basics: Understanding how to build/run containers. Ability to interact with simple cloud components (e.g., EC2/S3/Secrets or equivalent) in any major cloud provider (AWS, GCP, or Azure) if needed for the testing workflow.
● Familiarity with MITRE ATLAS, OWASP Top 10 for LLM Applications, or CySecBench
Adversarial Prompting & Security Mindset
● Ability to design adversarial, safety-stress, and misuse scenarios that challenge LLM guardrails.
● Understanding of categories of harm such as: social engineering / targeted manipulation, data leakage, multi-tenant isolation failures, model inversion, prompt injection, jailbreak attempts
● Creativity in constructing multi-turn, context-injection, and obfuscated prompts to probe model weaknesses.
Documentation & Quality
● Capable of clearly recording the prompt, expected outcome, actual outcome, and metadata.
● Methodical approach to testing and refining adversarial cases.
Ideal Candidate Profile
● Curious, detail-oriented, and comfortable exploring boundary cases of AI systems.
● Familiar with LLM behaviour (ChatGPT, Claude, Gemini, etc.).
● Able to work independently with minimal hand-holding.
● Comfortable working asynchronously in a distributed team with minimal supervision.
1 points | 0 comments