
Job Description

Job Requisition ID # 26WD95302

Position Overview

The Applied AI team in Autodesk’s Data and Process Management (DPM) organization ships cloud-native services that power AI agents and AI-driven workflows that make our Product Data Management (PDM) and Product Lifecycle Management (PLM) workflows smarter and easier.

We’re hiring a QA/SDET who is automation-first and production-minded. You’ll start by owning quality for cloud services and AI-integrated features, then grow into owning our AI evaluation pipelines (built on internal frameworks and Opik) over the next 2–3 quarters.

You don’t need to be an ML expert on day one—but you do need strong software engineering fundamentals, comfort working with distributed systems, and curiosity to learn AI-specific quality evaluation patterns and tools.


Responsibilities

Build and maintain automated tests for cloud-native services: API/contract tests and end-to-end workflow tests


Validate non-functional requirements: performance, resiliency/failure modes, multi-tenant behavior, and observability-driven debugging (logs/metrics/traces)


Partner with engineers and PMs to define acceptance criteria and quality gates for releases


Develop and maintain scenario-based regression suites for AI-integrated workflows (multi-step tasks, tool calls, retrieval-backed behaviors)


Build and operationalize evaluation pipelines using internal frameworks and evaluation tools like Opik


Curate and maintain “golden” datasets (test cases, expected behaviors, labels/metadata)


Automate agent evaluation runs (CI, scheduled runs, and/or sampled runtime evaluation)


Publish results to dashboards and establish alerting for failures/regressions


Security-aware testing for AI surfaces: include abuse cases (e.g., prompt-injection style attempts, unsafe tool execution paths, sensitive-data leakage checks) and verify guardrails/controls


AI-assisted delivery: Use AI coding agents to accelerate delivery of tests and automations


Minimum Qualifications

Bachelor’s or Master’s degree in Computer Science, Software Engineering, or equivalent practical experience


4+ years of experience as a QA Engineer, SDET, or Software Engineer with substantial test automation ownership


Strong programming skills in Python and/or TypeScript/Java; you write maintainable automation code, not just scripts


Experience testing cloud-native distributed systems (REST/GraphQL APIs, async workflows, service-to-service integrations)


Proven verification habits: test design, CI hygiene, disciplined incremental delivery, and strong debugging skills


Comfort operating production-like systems: reading telemetry, reproducing issues, triaging failures, and driving fixes with engineers


Strong communication: you can document test strategy, influence quality gates, and collaborate cross-functionally


Experience with testing AI-integrated systems in production (any of):


LLM feature regression testing, prompt/version change validation


RAG-style workflows (retrieval quality checks, grounding/citation checks, data freshness)


Tool-use / agentic workflows (validating tool-call sequences and failure recovery paths)


Demonstrated experience using AI coding tools to develop tests for production systems, and the engineering judgment to verify and correct AI output (code review rigor, debugging skill, ownership of correctness)


Preferred Qualifications

Familiarity with evaluation tooling (Opik, Langfuse, or similar), dataset versioning practices, and automated evaluation runs


Experience with performance testing and resiliency patterns (rate limiting, retries/idempotency validation, chaos/fault testing)


Security-minded testing experience, especially for systems that integrate external tools/data sources


#LI-KS2


Learn More


About Autodesk


Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made.


We take great pride in our culture here at Autodesk – it’s at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world.


When you’re an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!


Salary transparency


Salary is one part of Autodesk’s competitive compensation package. Offers are based on the candidate’s experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging


Are you an existing contractor or consultant with Autodesk?


Please search for open jobs and apply internally (not on this external site).


