
AI Information Security Engineer

Southern New Hampshire University
$94,130.00 - $150,634.00
medical insurance, parental leave, paid time off, paid holidays
United States, Arizona, Phoenix
Apr 02, 2026
Description

Southern New Hampshire University is a team of innovators. World changers. Individuals who believe in progress with purpose. Since 1932, our people-centered strategy has defined us - and helped us grow a team that now serves over 180,000 learners worldwide.

Our mission to transform lives is made possible by talented people who bring diverse industry experience, backgrounds and skills to the university. And today, we're ready to expand our reach. All we need is you.

Make an impact - from near or far

At SNHU, you'll have the option to work remotely in the following states: Alabama, Arizona, Arkansas, Delaware, Florida, Georgia, Hawaii, Idaho, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Mississippi, Missouri, Nebraska, New Hampshire, New Mexico, North Carolina, North Dakota, Ohio, Oklahoma, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, West Virginia, Wisconsin and Wyoming.

We ask that our remote employees have access to a reliable internet connection and a dedicated, properly equipped workspace that is free of distractions. Employees must reside in, and work from, one of the above approved states.

The opportunity

The AI Information Security Engineer will report to the Director of Information Security Engineering. This is a hands-on security engineering role responsible for securing AI systems, models, and agent-based workloads throughout development. You will focus on practical threat modeling, control implementation, testing, and monitoring to protect AI training data, inference pipelines, agents, tools, and generated outputs from misuse, compromise, or unintended harm. You will partner with AI engineering, platform, and security teams to ensure AI systems are secure by design, resilient to misuse, and observable in production at scale.

You will work 100% remotely from any of our approved states. #LI-Remote

What You'll Do:

  • Document AI system components and data flows, including prompts, context, embeddings, training data, model artifacts, outputs, and agent tool interactions.
  • In collaboration with the AI team, identify attack surfaces, trust boundaries, and privilege transitions within AI pipelines and agent workflows, and perform structured threat modeling for AI systems during design, development, and change cycles.
  • In collaboration with the AI team, translate identified threats into concrete, relevant security requirements and engineering tasks.
  • Implement technical controls informed by established AI security frameworks (e.g., OWASP LLM Top 10, NIST AI RMF) according to compliance requirements and AI governance guidance.
  • Design, build, and maintain automated security testing for AI systems within CI/CD pipelines, including tests for prompt injection, unsafe model behavior, misconfigured access, data exposure, and agent misuse. Ensure AI security controls are validated during build, deployment, and change cycles, with failures surfaced early to engineering teams.
  • Implement technical guardrails to protect sensitive data used by AI systems, including retrieval-augmented generation (RAG) pipelines and external data sources.
  • In collaboration with the AI team, design and operate controls for sensitive data identification, minimization, redaction, and leakage prevention, addressing PII and other protected data in prompts, context, embeddings, and outputs to ensure privacy-preserving AI operation in production environments.
  • Design, implement, and maintain security controls across the full AI/ML lifecycle, including data ingestion, training, evaluation, deployment, inference, and CI/CD, covering model artifacts, configurations, embeddings, prompts, and deployment patterns.
  • Implement and operate runtime safeguards for AI services and agent-based systems, including input and output controls, context isolation, tool use restrictions, and abuse prevention mechanisms (e.g., rate limiting and anomaly detection), ensuring safe operation without breaking functional requirements.
  • Design security controls that balance safety, system performance, reliability, and developer usability in production AI services.
  • Implement and operate secure identity, secrets, and access control patterns for AI services, agents, and integrations, enforcing least privilege, integrating with enterprise IAM and key management systems, and monitoring credential usage and rotation.
  • Instrument AI systems to produce actionable logging, metrics, and traces; build dashboards and alerts for detecting prompt manipulation, anomalous usage, and unexpected behavior; and integrate AI-specific signals into enterprise security operations workflows.
  • Embed with AI engineering and platform teams to design and maintain technical security controls; develop reusable security components and patterns; contribute documentation and runbooks; and, in collaboration with the AI team, communicate AI security requirements and remediation outcomes to technical, non-technical, and cross-functional stakeholders.

What We're Looking For:

  • 5+ years of experience in IT or cybersecurity, with engineering responsibilities (e.g., IT security or application development)
  • 2+ years of experience securing AI/ML systems or adjacent domains, with demonstrated application to AI workloads.
  • Experience with security engineering principles, including authentication, authorization, logging, and monitoring.
  • Experience with AI/ML concepts such as models, training data, inference pipelines, embeddings, and agent frameworks.
  • Experience modeling data flows, trust boundaries, and attack paths in AI systems.
  • Experience mitigating threats such as prompt injection, model poisoning, model theft, and data leakage.
  • Experience implementing controls such as input validation, output filtering, context isolation, and abuse detection.

We believe real innovation comes from inclusion - where different experiences, perspectives and talents are celebrated. So if you're wondering whether SNHU is right for you, take the leap and apply. You might be just the person we're looking for.

Compensation

The annual pay range for this position is $94,130.00 - $150,634.00. The actual offer will be based on skills, qualifications, experience, and internal equity, in addition to relevant business considerations. We expect this position to be hired in the following target hiring range: $104,012.00 - $140,723.00.

Exceptional benefits (because you're exceptional)

You're the whole package. Your benefits should be, too. As a full-time employee at SNHU, you'll get:

  • High-quality, low-deductible medical insurance

  • Low to no-cost dental and vision plans

  • 5 weeks of paid time off (plus almost a dozen paid holidays)

  • Employer-funded retirement

  • Free tuition program

  • Parental leave

  • Mental health and wellbeing resources
