Documentation

Learn how to use MEDMAPPER AI STUDIO.

This documentation is intentionally product-usage focused. It covers onboarding, project setup, review workflow, outputs, and deployment choices without exposing proprietary design plans, private APIs, or internal operational details.

Getting started

What MEDMAPPER AI STUDIO is for

MEDMAPPER AI STUDIO helps healthcare data teams move from source-system schemas to reviewed, validated target-model outputs. It is designed for workflows where speed matters, but so do evidence, auditability, and human review.

The platform is a fit when your team needs to map healthcare data into standard or custom target models such as OMOP, PCORnet, FHIR, or other governed analytical structures. It is also a fit when stakeholders need to inspect how suggestions were made before trusting them downstream.

  • Use it to accelerate schema-to-model mapping with review controls.
  • Use it when multiple reviewers need a shared workflow for evidence and approval.
  • Use it when downstream outputs must reflect only approved, validated work.

Create your first project

How to start a project in MEDMAPPER

  1. Create a new project and give it a clear name tied to the source system and target model.
  2. Choose the source system or warehouse you want to analyze.
  3. Select the target model your team needs to support.
  4. Verify the connection and scope the source tables or schemas you want to include.
  5. Let the platform run discovery so the project is grounded in the actual schema structure.

After setup, the project moves into schema discovery, domain detection, join inference, and mapping review. The exact timing depends on the source complexity and your review posture, but the product experience stays centered on one governed workflow.
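The setup steps above can be pictured as a small declarative project definition. This is a hypothetical sketch only; every key and value below is an illustrative assumption, and MEDMAPPER's actual setup flow is UI-driven rather than code-driven.

```python
# Hypothetical project definition mirroring the setup steps: a clear name,
# a source system, a target model, and a scoped set of tables. All names
# here are illustrative assumptions, not MEDMAPPER configuration syntax.
project = {
    "name": "ehr-to-omop-claims",          # step 1: name tied to source and target
    "source": {                            # step 2: source system or warehouse
        "system": "postgres_warehouse",
        "schemas": ["clinical", "billing"],  # step 4: scoped schemas
    },
    "target_model": "OMOP",                # step 3: target model to support
    "scope": {"include_tables": ["patients", "encounters", "claims"]},
}
# Step 5: discovery then grounds the project in the actual schema structure.
```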

Review workflow

How teams review mappings and joins

MEDMAPPER is designed around the idea that AI suggestions should be explainable and reviewable. The platform surfaces candidate joins, mapping proposals, and validation findings in workflows built for structured review rather than black-box automation.

Join evidence

Inspect compatibility, relationship plausibility, and supporting signals before accepting joins.
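The kinds of signals reviewers inspect can be sketched generically: type compatibility, column-name similarity, and value overlap between candidate join keys. This is an illustrative example of such signals, not MEDMAPPER's actual scoring method; the function name, fields, and sample data are assumptions.

```python
# Illustrative join-evidence signals for a candidate join between two
# columns: type compatibility, name similarity, and value overlap.
# All names and sample values below are hypothetical.
from difflib import SequenceMatcher

def join_evidence(left_col, right_col, left_values, right_values):
    """Score a candidate join on simple, inspectable signals."""
    type_match = left_col["type"] == right_col["type"]
    name_sim = SequenceMatcher(
        None, left_col["name"].lower(), right_col["name"].lower()
    ).ratio()
    left_set, right_set = set(left_values), set(right_values)
    overlap = len(left_set & right_set) / max(len(left_set | right_set), 1)
    return {
        "type_match": type_match,
        "name_similarity": round(name_sim, 2),
        "value_overlap": round(overlap, 2),
    }

evidence = join_evidence(
    {"name": "patient_id", "type": "varchar"},
    {"name": "pat_id", "type": "varchar"},
    ["p1", "p2", "p3"],
    ["p2", "p3", "p4"],
)
```

A reviewer seeing matching types, similar names, and partial value overlap has concrete evidence to weigh before accepting the join.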

Mapping Workbench

Review source expressions, confidence states, transform intent, and status in a single consolidated view.

Validation and lineage

Trace how data flows along source-to-target paths and resolve blocking issues before delivery.
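Lineage tracing of this kind is commonly modeled as a graph walk from a source field to every downstream target it feeds. The sketch below illustrates that general idea under assumed field names; it is not MEDMAPPER's internal lineage representation.

```python
# Illustrative lineage trace: edges map a source field to the target
# fields it feeds, and a walk collects everything downstream.
# Field names are hypothetical examples, not a real schema.
def trace_lineage(edges, source_field):
    """Return every downstream target field reachable from source_field."""
    reached, frontier = set(), [source_field]
    while frontier:
        node = frontier.pop()
        for target in edges.get(node, []):
            if target not in reached:
                reached.add(target)
                frontier.append(target)
    return sorted(reached)

edges = {
    "src.encounters.admit_dt": ["stg.visits.start_date"],
    "stg.visits.start_date": ["omop.visit_occurrence.visit_start_date"],
}
downstream = trace_lineage(edges, "src.encounters.admit_dt")
# downstream -> ['omop.visit_occurrence.visit_start_date', 'stg.visits.start_date']
```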

In practice, most teams use MEDMAPPER to focus human review where the risk is highest while allowing stronger evidence-backed suggestions to move faster through the workflow.
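The triage idea in the paragraph above can be sketched as a simple routing rule: low-confidence suggestions go to human review, while higher-confidence, evidence-backed ones move faster. The threshold and fields here are illustrative assumptions, not product behavior.

```python
# Sketch of confidence-based triage: route suggestions below a review
# threshold to human reviewers. The 0.8 cutoff and the record fields
# are hypothetical choices for illustration.
def triage(suggestions, review_threshold=0.8):
    """Split suggestions into fast-track and needs-review buckets."""
    fast, needs_review = [], []
    for s in suggestions:
        bucket = fast if s["confidence"] >= review_threshold else needs_review
        bucket.append(s)
    return fast, needs_review

fast, needs_review = triage([
    {"target": "person.person_id", "confidence": 0.95},
    {"target": "person.race_concept_id", "confidence": 0.55},
])
```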

Outputs and delivery

How delivery works

MEDMAPPER produces downstream outputs only after the relevant work has been reviewed and validated. The platform is designed so delivery reflects governed project state rather than incomplete drafts.

  • Use SQL outputs when downstream engineering teams need reviewed transformation logic.
  • Use dbt-oriented outputs when your delivery path benefits from analytics engineering workflows.
  • Use GitHub publishing workflows when your organization wants repository-based review and delivery.

The practical rule for users is simple: move through review and validation first, then generate and publish outputs once the project is ready for delivery.
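That "approved work only" rule can be sketched as a gate on output generation: rendering fails if any unreviewed mappings remain. The status values and the SQL shape below are illustrative assumptions, not MEDMAPPER's actual output format.

```python
# Hedged sketch of gated delivery: generate SQL only when every mapping
# has been approved. Statuses, field names, and the emitted SQL shape
# are illustrative, not the product's real output.
def render_sql(mappings, from_clause):
    """Render a SELECT from approved mappings, or refuse if review is incomplete."""
    approved = [m for m in mappings if m["status"] == "approved"]
    if len(approved) != len(mappings):
        raise ValueError("unreviewed mappings remain; finish review first")
    cols = ",\n  ".join(f'{m["expression"]} AS {m["target"]}' for m in approved)
    return f"SELECT\n  {cols}\nFROM {from_clause};"

mappings = [
    {"expression": "p.person_id", "target": "person_id", "status": "approved"},
    {"expression": "CAST(p.dob AS DATE)", "target": "birth_date", "status": "approved"},
]
sql = render_sql(mappings, "source.patients p")
```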

Choose an LLM

How to choose an LLM for mapping work

When teams evaluate an LLM for MEDMAPPER workflows, the goal is not to find the most creative model. The goal is to find a model that behaves predictably in schema interpretation, produces structured outputs consistently, and fits the deployment and governance requirements of the environment.

Optimize for reliability

Prefer models that follow instructions consistently, handle structured schema context well, and are less likely to invent unsupported mappings or relationships.

Match the security posture

Choose a provider and deployment path that fits your organization’s data-handling requirements, review standards, and enterprise approval process.

  • Test models on representative schema metadata, not raw patient data.
  • Compare models on mapping quality, consistency, latency, and review burden.
  • Favor models that work well with structured prompts and deterministic validation steps.
  • Re-evaluate choices when your target models, review standards, or deployment constraints change.

In practice, many teams start by selecting the LLM provider that best fits their compliance and deployment posture, then narrow the decision by evaluating how well a short list of models performs on real schema-to-target-model review tasks.
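One way to make "behaves predictably" and "produces structured outputs consistently" measurable is a small harness that runs the same schema-metadata prompt several times per model and checks that every response parses into the expected mapping shape. The harness below is a generic sketch; `call_model` is a stand-in for whatever provider SDK you actually use, and the JSON shape is an assumption.

```python
# Illustrative consistency harness: repeat a schema-metadata prompt and
# measure the fraction of runs that return well-formed mapping JSON.
# `call_model` and the {source, target} shape are illustrative assumptions.
import json

def structural_consistency(call_model, model_name, prompt, runs=5):
    """Fraction of runs returning a parseable list of {source, target} mappings."""
    ok = 0
    for _ in range(runs):
        try:
            parsed = json.loads(call_model(model_name, prompt))
            if all({"source", "target"} <= m.keys() for m in parsed):
                ok += 1
        except (json.JSONDecodeError, AttributeError, TypeError):
            pass  # malformed output counts against the model
    return ok / runs

# Stubbed provider for demonstration: always returns valid mapping JSON.
def fake_call(model, prompt):
    return '[{"source": "pat.dob", "target": "person.birth_datetime"}]'

score = structural_consistency(fake_call, "candidate-model", "map pat -> person")
```

Running the same harness against a short list of candidate models on representative schema metadata gives a like-for-like consistency comparison before weighing latency and review burden.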

Deployment options

Choose the deployment model that fits your environment

SaaS

Best when you want faster onboarding, a centralized product experience, and a direct path to evaluation for mapping and review workflows.

Customer-deployed data plane

Best when your organization needs stricter runtime boundaries while keeping a common MEDMAPPER workflow for discovery, review, and delivery.

This public documentation does not expose low-level architectural controls or private implementation specifics. For deployment evaluation, use a product demo or architecture review with your team.

FAQ

Common questions

Does this documentation expose proprietary implementation details?

No. It is limited to safe, public-facing product usage guidance.

Can I learn the platform workflow from this page alone?

Yes. This page is meant to help teams understand how to get started and how the review-driven workflow operates.

Where should my team go for deeper architecture review?

Use the product demo and security review paths rather than relying on public documentation for environment-specific decisions.