RAGAS_APP_TOKEN = "api_key"
RAGAS_API_ENDPOINT = "https://api.dev.app.ragas.io"
Ragas API Client
RagasApiClient
RagasApiClient (base_url:str, app_token:Optional[str]=None)
Client for the Ragas Relay API.
Projects
RagasApiClient.delete_project
RagasApiClient.delete_project (project_id:str)
Delete a project.
RagasApiClient.update_project
RagasApiClient.update_project (project_id:str, title:Optional[str]=None, description:Optional[str]=None)
Update an existing project.
RagasApiClient.create_project
RagasApiClient.create_project (title:str, description:Optional[str]=None)
Create a new project.
RagasApiClient.get_project
RagasApiClient.get_project (project_id:str)
Get a specific project by ID.
RagasApiClient.list_projects
RagasApiClient.list_projects (ids:Optional[List[str]]=None, limit:int=50, offset:int=0, order_by:Optional[str]=None, sort_dir:Optional[str]=None)
List projects.
# Initialize client with your authentication token
client = RagasApiClient(base_url=RAGAS_API_ENDPOINT, app_token=RAGAS_APP_TOKEN)
# List projects. Note that list_projects() returns a dict with "items" and
# "pagination" keys, not a plain list, so iterate over projects["items"].
try:
    projects = await client.list_projects(limit=10)
    print(f"Found {len(projects['items'])} projects:")
    for project in projects["items"]:
        print(f"- {project['title']} (ID: {project['id']})")
except Exception as e:
    print(f"Error: {e}")
await client.create_project("test project", "test description")
{'id': '26b0e577-8ff8-4014-bc7a-cfc410df3488',
 'title': 'test project',
 'description': 'test description',
 'created_at': '2025-04-10T00:12:34.606398+00:00',
 'updated_at': '2025-04-10T00:12:34.606398+00:00'}
await client.list_projects()
{'items': [{'id': '1ef0843b-231f-4a2c-b64d-d39bcee9d830',
   'title': 'yann-lecun-wisdom',
   'description': 'Yann LeCun Wisdom',
   'created_at': '2025-04-15T03:27:08.962384+00:00',
   'updated_at': '2025-04-15T03:27:08.962384+00:00'},
  {'id': 'c2d788ec-a602-495b-8ddc-f457ce11b414',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-12T19:47:10.928422+00:00',
   'updated_at': '2025-04-12T19:47:10.928422+00:00'},
  {'id': '0d465f02-c88f-454e-9ff3-780a001e3e21',
   'title': 'test project',
   'description': 'test description',
   'created_at': '2025-04-12T19:46:36.221385+00:00',
   'updated_at': '2025-04-12T19:46:36.221385+00:00'},
  {'id': '2ae1434c-e700-44a7-9528-7c2f03cfb491',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-12T19:46:36.157122+00:00',
   'updated_at': '2025-04-12T19:46:36.157122+00:00'},
  {'id': 'adb45ec6-6902-4339-b05f-3b86fd256c7e',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-12T19:45:54.430913+00:00',
   'updated_at': '2025-04-12T19:45:54.430913+00:00'},
  {'id': '6f26bf5b-af4d-48b5-af2d-13d3e671bbbf',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:56:30.085249+00:00',
   'updated_at': '2025-04-11T00:56:30.085249+00:00'},
  {'id': '63e4fc0f-1a60-441b-bd71-f21ce8e35c7e',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:44:56.031721+00:00',
   'updated_at': '2025-04-11T00:44:56.031721+00:00'},
  {'id': 'db0bedd6-6cfa-4551-b1ab-af78fa82dca7',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:44:17.601598+00:00',
   'updated_at': '2025-04-11T00:44:17.601598+00:00'},
  {'id': '80c8ef9a-23d7-4a9f-a7d7-36c6472ab51e',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:42:37.287184+00:00',
   'updated_at': '2025-04-11T00:42:37.287184+00:00'},
  {'id': 'ae2a5a5c-3902-4ef6-af50-f2d8f27feea6',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:40:53.71528+00:00',
   'updated_at': '2025-04-11T00:40:53.71528+00:00'},
  {'id': '96618f8b-d3a1-4998-9a66-155f8f254512',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:31:21.410658+00:00',
   'updated_at': '2025-04-11T00:31:21.410658+00:00'},
  {'id': '4515aa23-cb4c-4c0a-b833-fefd0a30fdcc',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:27:49.977435+00:00',
   'updated_at': '2025-04-11T00:27:49.977435+00:00'},
  {'id': '138098a4-651e-4dca-b226-d70956b3e039',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-11T00:24:03.39505+00:00',
   'updated_at': '2025-04-11T00:24:03.39505+00:00'},
  {'id': 'bbe45632-3268-43a6-9694-b020b3f5226f',
   'title': 'Demo Project',
   'description': None,
   'created_at': '2025-04-10T22:41:14.663646+00:00',
   'updated_at': '2025-04-10T22:41:14.663646+00:00'},
  {'id': 'df764139-bac7-4aec-af24-5c6886189f84',
   'title': 'SuperMe-Demo',
   'description': 'SuperMe demo to show the team',
   'created_at': '2025-04-10T04:35:18.631257+00:00',
   'updated_at': '2025-04-10T04:35:18.631257+00:00'},
  {'id': 'a6ccabe0-7b8d-4866-98af-f167a36b94ff',
   'title': 'SuperMe',
   'description': 'SuperMe demo to show the team',
   'created_at': '2025-04-10T03:10:29.153622+00:00',
   'updated_at': '2025-04-10T03:10:29.153622+00:00'}],
 'pagination': {'offset': 0,
  'limit': 50,
  'total': 16,
  'order_by': 'created_at',
  'sort_dir': 'desc'}}
TEST_PROJECT_ID = "a6ccabe0-7b8d-4866-98af-f167a36b94ff"
project = await client.get_project(TEST_PROJECT_ID)
RagasApiClient.get_project_by_name
RagasApiClient.get_project_by_name (project_name:str)
*Get a project by its name.

Args:
    project_name: Name of the project to find

Returns:
    The project information dictionary

Raises:
    ProjectNotFoundError: If no project with the given name is found
    DuplicateProjectError: If multiple projects with the given name are found*
await client.get_project_by_name("SuperMe")
{'id': 'a6ccabe0-7b8d-4866-98af-f167a36b94ff',
 'title': 'SuperMe',
 'description': 'SuperMe demo to show the team',
 'created_at': '2025-04-10T03:10:29.153622+00:00',
 'updated_at': '2025-04-10T03:10:29.153622+00:00'}
Datasets
RagasApiClient.delete_dataset
RagasApiClient.delete_dataset (project_id:str, dataset_id:str)
Delete a dataset.
RagasApiClient.update_dataset
RagasApiClient.update_dataset (project_id:str, dataset_id:str, name:Optional[str]=None, description:Optional[str]=None)
Update an existing dataset.
RagasApiClient.create_dataset
RagasApiClient.create_dataset (project_id:str, name:str, description:Optional[str]=None)
Create a new dataset in a project.
RagasApiClient.get_dataset
RagasApiClient.get_dataset (project_id:str, dataset_id:str)
Get a specific dataset.
RagasApiClient.list_datasets
RagasApiClient.list_datasets (project_id:str, limit:int=50, offset:int=0, order_by:Optional[str]=None, sort_dir:Optional[str]=None)
List datasets in a project.
# check project ID
projects = await client.list_projects()
projects["items"][0]["id"], TEST_PROJECT_ID('1ef0843b-231f-4a2c-b64d-d39bcee9d830',
 'a6ccabe0-7b8d-4866-98af-f167a36b94ff')
# Create a new dataset
new_dataset = await client.create_dataset(
    projects["items"][0]["id"], "New Dataset", "This is a new dataset"
)
print(f"New dataset created: {new_dataset}")New dataset created: {'id': '2382037f-906c-45a0-9b9f-702d32903efd', 'name': 'New Dataset', 'description': 'This is a new dataset', 'updated_at': '2025-04-16T03:52:01.91574+00:00', 'created_at': '2025-04-16T03:52:01.91574+00:00', 'version_counter': 0, 'project_id': '1ef0843b-231f-4a2c-b64d-d39bcee9d830'}
# List datasets in the project
datasets = await client.list_datasets(projects["items"][0]["id"])
print(f"Found {len(datasets)} datasets")Found 2 datasets
updated_dataset = await client.update_dataset(
    projects["items"][0]["id"],
    datasets["items"][0]["id"],
    "Updated Dataset",
    "This is an updated dataset",
)
print(f"Updated dataset: {updated_dataset}")Updated dataset: {'id': '8572180f-fddf-46c5-b943-e6ff6448eb01', 'name': 'Updated Dataset', 'description': 'This is an updated dataset', 'created_at': '2025-04-15T03:28:09.050125+00:00', 'updated_at': '2025-04-16T03:52:09.627448+00:00', 'version_counter': 0, 'project_id': '1ef0843b-231f-4a2c-b64d-d39bcee9d830'}
# Delete the dataset
await client.delete_dataset(projects["items"][0]["id"], datasets["items"][0]["id"])
print("Dataset deleted")Dataset deleted
For the time being, I’ve also added an option to fetch a dataset by name:
RagasApiClient.get_dataset_by_name
RagasApiClient.get_dataset_by_name (project_id:str, dataset_name:str)
*Get a dataset by its name.

Args:
    project_id: ID of the project
    dataset_name: Name of the dataset to find

Returns:
    The dataset information dictionary

Raises:
    DatasetNotFoundError: If no dataset with the given name is found
    DuplicateDatasetError: If multiple datasets with the given name are found*
await client.get_dataset_by_name(project_id=TEST_PROJECT_ID, dataset_name="test")
---------------------------------------------------------------------------
DuplicateDatasetError                     Traceback (most recent call last)
Cell In[19], line 1
----> 1 await client.get_dataset_by_name(project_id=TEST_PROJECT_ID, dataset_name="test")
...
DuplicateDatasetError: Multiple datasets found with name 'test' in project a6ccabe0-7b8d-4866-98af-f167a36b94ff. Dataset IDs: 9a48d5d1-531f-424f-b2d2-d8f9bcaeec1e, 483477a4-3d00-4010-a253-c92dee3bc092. Please use get_dataset() with a specific ID instead.
Experiments
RagasApiClient.delete_experiment
RagasApiClient.delete_experiment (project_id:str, experiment_id:str)
Delete an experiment.
RagasApiClient.update_experiment
RagasApiClient.update_experiment (project_id:str, experiment_id:str, name:Optional[str]=None, description:Optional[str]=None)
Update an existing experiment.
RagasApiClient.create_experiment
RagasApiClient.create_experiment (project_id:str, name:str, description:Optional[str]=None)
Create a new experiment in a project.
RagasApiClient.get_experiment
RagasApiClient.get_experiment (project_id:str, experiment_id:str)
Get a specific experiment.
RagasApiClient.list_experiments
RagasApiClient.list_experiments (project_id:str, limit:int=50, offset:int=0, order_by:Optional[str]=None, sort_dir:Optional[str]=None)
List experiments in a project.
# create a new experiment
new_experiment = await client.create_experiment(
    projects["items"][0]["id"], "New Experiment", "This is a new experiment"
)
print(f"New experiment created: {new_experiment}")
# list experiments
experiments = await client.list_experiments(projects["items"][0]["id"])
print(f"Found {len(experiments)} experiments")
# get a specific experiment
experiment = await client.get_experiment(
    projects["items"][0]["id"], experiments["items"][0]["id"]
)
print(f"Experiment: {experiment}")
# update an experiment
updated_experiment = await client.update_experiment(
    projects["items"][0]["id"],
    experiments["items"][0]["id"],
    "Updated Experiment",
    "This is an updated experiment",
)
print(f"Updated experiment: {updated_experiment}")
# delete an experiment
await client.delete_experiment(projects["items"][0]["id"], experiments["items"][0]["id"])
print("Experiment deleted")New experiment created: {'id': 'b575c5d1-6934-45c0-b67a-fc9a4d7bdba3', 'name': 'New Experiment', 'description': 'This is a new experiment', 'updated_at': '2025-04-10T00:12:39.955229+00:00', 'created_at': '2025-04-10T00:12:39.955229+00:00', 'version_counter': 0, 'project_id': '26b0e577-8ff8-4014-bc7a-cfc410df3488'}
Found 2 experiments
Experiment: {'id': 'b575c5d1-6934-45c0-b67a-fc9a4d7bdba3', 'name': 'New Experiment', 'description': 'This is a new experiment', 'created_at': '2025-04-10T00:12:39.955229+00:00', 'updated_at': '2025-04-10T00:12:39.955229+00:00', 'version_counter': 0, 'project_id': '26b0e577-8ff8-4014-bc7a-cfc410df3488'}
Updated experiment: {'id': 'b575c5d1-6934-45c0-b67a-fc9a4d7bdba3', 'name': 'Updated Experiment', 'description': 'This is an updated experiment', 'created_at': '2025-04-10T00:12:39.955229+00:00', 'updated_at': '2025-04-10T00:12:41.676216+00:00', 'version_counter': 0, 'project_id': '26b0e577-8ff8-4014-bc7a-cfc410df3488'}
Experiment deleted
await client.list_experiments(TEST_PROJECT_ID)
{'items': [{'id': '78fd6c58-7edf-4239-93d1-4f49185d8e49',
   'name': 'New Experiment',
   'description': 'This is a new experiment',
   'created_at': '2025-03-30T06:31:31.689269+00:00',
   'updated_at': '2025-03-30T06:31:31.689269+00:00',
   'project_id': 'e1b3f1e4-d344-48f4-a178-84e7e32e6ab6'},
  {'id': '7c695b58-7fc3-464c-a18b-a96e35f9684d',
   'name': 'New Experiment',
   'description': 'This is a new experiment',
   'created_at': '2025-04-09T17:03:44.340782+00:00',
   'updated_at': '2025-04-09T17:03:44.340782+00:00',
   'project_id': 'e1b3f1e4-d344-48f4-a178-84e7e32e6ab6'}],
 'pagination': {'offset': 0,
  'limit': 50,
  'total': 2,
  'order_by': 'created_at',
  'sort_dir': 'asc'}}
RagasApiClient.get_experiment_by_name
RagasApiClient.get_experiment_by_name (project_id:str, experiment_name:str)
*Get an experiment by its name.

Args:
    project_id: ID of the project containing the experiment
    experiment_name: Name of the experiment to find

Returns:
    The experiment information dictionary

Raises:
    ExperimentNotFoundError: If no experiment with the given name is found
    DuplicateExperimentError: If multiple experiments with the given name are found*
await client.get_experiment_by_name(TEST_PROJECT_ID, "test")
---------------------------------------------------------------------------
DuplicateExperimentError                  Traceback (most recent call last)
Cell In[23], line 1
----> 1 await client.get_experiment_by_name(TEST_PROJECT_ID, "test")
...
DuplicateExperimentError: Multiple experiments found with name 'test' in project a6ccabe0-7b8d-4866-98af-f167a36b94ff. Experiment IDs: e1ae15aa-2e0e-40dd-902a-0f0e0fd4df69, 52428c79-afdf-468e-82dc-6ef82c5b71d2, 55e14ac3-0037-4909-898f-eee9533a6d3f, 9adfa008-b479-41cf-ba28-c860e01401ea, 233d28c8-6556-49c5-b146-1e001720c214, 6aed5143-3f60-4bf2-bcf2-ecfdb950e992. Please use get_experiment() with a specific ID instead.
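The by-name helpers for projects, datasets, and experiments all share one lookup pattern: list the resources, filter by name, and raise on zero or multiple matches. Here is a standalone sketch of that pattern; find_by_name and the two error classes are illustrative stand-ins, not the client's internals.

```python
import typing as t


class NotFoundError(Exception):
    """No resource with the requested name."""


class DuplicateError(Exception):
    """More than one resource with the requested name."""


def find_by_name(items: t.List[dict], name: str, name_field: str = "name") -> dict:
    """Return the single item whose name_field equals name, or raise."""
    matches = [it for it in items if it.get(name_field) == name]
    if not matches:
        raise NotFoundError(f"No resource named {name!r}")
    if len(matches) > 1:
        ids = ", ".join(it["id"] for it in matches)
        raise DuplicateError(f"Multiple resources named {name!r}: {ids}")
    return matches[0]


items = [{"id": "a", "name": "test"}, {"id": "b", "name": "test"},
         {"id": "c", "name": "unique"}]
print(find_by_name(items, "unique")["id"])  # c
```

Calling find_by_name(items, "test") would raise DuplicateError, mirroring the DuplicateExperimentError shown above.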
Columns (for datasets)
RagasApiClient.delete_dataset_column
RagasApiClient.delete_dataset_column (project_id:str, dataset_id:str, column_id:str)
Delete a column from a dataset.
RagasApiClient.update_dataset_column
RagasApiClient.update_dataset_column (project_id:str, dataset_id:str, column_id:str, **column_data)
Update an existing column in a dataset.
RagasApiClient.create_dataset_column
RagasApiClient.create_dataset_column (project_id:str, dataset_id:str, id:str, name:str, type:str, col_order:Optional[int]=None, settings:Optional[Dict]=None)
Create a new column in a dataset.
RagasApiClient.get_dataset_column
RagasApiClient.get_dataset_column (project_id:str, dataset_id:str, column_id:str)
Get a specific column in a dataset.
RagasApiClient.list_dataset_columns
RagasApiClient.list_dataset_columns (project_id:str, dataset_id:str, limit:int=50, offset:int=0, order_by:Optional[str]=None, sort_dir:Optional[str]=None)
List columns in a dataset.
datasets = await client.create_dataset(
    projects["items"][0]["id"],
    "New Dataset for testing columns",
    "This is a new dataset for testing columns",
)
datasets
{'id': 'cc6794e1-3505-4d5c-b403-ca7e55142bbc',
 'name': 'New Dataset for testing columns',
 'description': 'This is a new dataset for testing columns',
 'updated_at': '2025-04-16T18:05:53.249101+00:00',
 'created_at': '2025-04-16T18:05:53.249101+00:00',
 'version_counter': 0,
 'project_id': '3d9b529b-c23f-4e87-8a26-dd1923749aa7'}
# add a new column to the dataset
new_column = await client.create_dataset_column(
    project_id=projects["items"][0]["id"],
    dataset_id=datasets["id"],
    id="new_column_5",
    name="New Column 3",
    type=ColumnType.SELECT.value,
    settings={
        "width": 255,
        "isVisible": True,
        "isEditable": True,
        "options": [
            {"name": "name", "color": "hsl(200, 100%, 50%)", "value": "name"},
            {"name": "age", "color": "hsl(200, 100%, 50%)", "value": "age"},
            {"name": "gender", "color": "hsl(200, 100%, 50%)", "value": "gender"},
        ]
    },
)
new_column
{'id': 'new_column_5',
 'name': 'New Column 5',
 'type': 'select',
 'settings': {'id': 'new_column_5',
  'name': 'New Column 5',
  'type': 'select',
  'width': 255,
  'options': [{'name': 'name', 'value': 'name'},
   {'name': 'age', 'value': 'age'},
   {'name': 'gender', 'value': 'gender'}],
  'isVisible': True,
  'isEditable': True},
 'created_at': '2025-04-16T18:11:14.305975+00:00',
 'updated_at': '2025-04-16T18:11:14.305975+00:00',
 'datatable_id': 'cc6794e1-3505-4d5c-b403-ca7e55142bbc'}
await client.list_dataset_columns(projects["items"][0]["id"], "271b8bc7-2d04-43b8-8960-ce20365f546b")
{'items': [{'id': 'dQ7hCb1AUfog',
   'name': 'tags_color_coded',
   'type': 'select',
   'settings': {'id': 'dQ7hCb1AUfog',
    'name': 'tags_color_coded',
    'type': 'select',
    'width': 255,
    'options': [{'name': 'red', 'color': 'hsl(0, 85%, 60%)', 'value': 'red'},
     {'name': 'green', 'color': 'hsl(30, 85%, 60%)', 'value': 'green'},
     {'name': 'blue', 'color': 'hsl(45, 85%, 60%)', 'value': 'blue'}],
    'isVisible': True,
    'isEditable': True},
   'created_at': '2025-04-16T19:00:39.936764+00:00',
   'updated_at': '2025-04-16T19:00:39.936764+00:00',
   'datatable_id': '271b8bc7-2d04-43b8-8960-ce20365f546b'},
  {'id': 'eCAiMBRqm0Uc',
   'name': 'id',
   'type': 'number',
   'settings': {'id': 'eCAiMBRqm0Uc',
    'name': 'id',
    'type': 'number',
    'width': 255,
    'isVisible': True,
    'isEditable': True},
   'created_at': '2025-04-16T19:00:39.971857+00:00',
   'updated_at': '2025-04-16T19:00:39.971857+00:00',
   'datatable_id': '271b8bc7-2d04-43b8-8960-ce20365f546b'},
  {'id': 'fRegl7Ucx3Sp',
   'name': 'description',
   'type': 'longText',
   'settings': {'id': 'fRegl7Ucx3Sp',
    'name': 'description',
    'type': 'longText',
    'width': 255,
    'isVisible': True,
    'isEditable': True,
    'max_length': 1000},
   'created_at': '2025-04-16T19:00:40.055047+00:00',
   'updated_at': '2025-04-16T19:00:40.055047+00:00',
   'datatable_id': '271b8bc7-2d04-43b8-8960-ce20365f546b'},
  {'id': 'foebrzYhiu9x',
   'name': 'tags',
   'type': 'select',
   'settings': {'id': 'foebrzYhiu9x',
    'name': 'tags',
    'type': 'select',
    'width': 255,
    'options': [{'name': 'tag1', 'color': 'hsl(0, 85%, 60%)', 'value': 'tag1'},
     {'name': 'tag2', 'color': 'hsl(30, 85%, 60%)', 'value': 'tag2'},
     {'name': 'tag3', 'color': 'hsl(45, 85%, 60%)', 'value': 'tag3'}],
    'isVisible': True,
    'isEditable': True},
   'created_at': '2025-04-16T19:00:40.084457+00:00',
   'updated_at': '2025-04-16T19:00:40.084457+00:00',
   'datatable_id': '271b8bc7-2d04-43b8-8960-ce20365f546b'},
  {'id': 'ciAzRUhKct9c',
   'name': 'name',
   'type': 'longText',
   'settings': {'id': 'ciAzRUhKct9c',
    'name': 'name',
    'type': 'longText',
    'width': 255,
    'isVisible': True,
    'isEditable': True,
    'max_length': 1000},
   'created_at': '2025-04-16T19:00:40.232989+00:00',
   'updated_at': '2025-04-16T19:00:40.232989+00:00',
   'datatable_id': '271b8bc7-2d04-43b8-8960-ce20365f546b'},
  {'id': 'iAW5muBh9mc251p8-LqKz',
   'name': 'url',
   'type': 'url',
   'settings': {'id': 'iAW5muBh9mc251p8-LqKz',
    'name': 'url',
    'type': 'url',
    'width': 192,
    'position': 5,
    'isVisible': True,
    'isEditable': True},
   'created_at': '2025-04-16T20:13:09.418698+00:00',
   'updated_at': '2025-04-16T20:13:16.914367+00:00',
   'datatable_id': '271b8bc7-2d04-43b8-8960-ce20365f546b'}],
 'pagination': {'offset': 0,
  'limit': 50,
  'total': 6,
  'order_by': 'created_at',
  'sort_dir': 'asc'}}
col3 = await client.get_dataset_column(
    projects["items"][0]["id"], datasets["id"], "new_column_3"
)
col3
{'id': 'new_column_3',
 'name': 'New Column 3',
 'type': 'text',
 'settings': {'id': 'new_column_3',
  'name': 'New Column 3',
  'type': 'text',
  'max_length': 255,
  'is_required': True},
 'created_at': '2025-04-10T02:22:07.300895+00:00',
 'updated_at': '2025-04-10T02:22:07.300895+00:00',
 'datatable_id': 'ebc3dd3e-f88b-4f8b-8c72-6cfcae0a0cd4'}
await client.update_dataset_column(
    projects["items"][0]["id"],
    datasets["id"],
    "new_column_3",
    name="New Column 3 Updated",
    type=ColumnType.NUMBER.value,
)
{'id': 'new_column_3',
 'name': 'New Column 3 Updated',
 'type': 'number',
 'settings': {'id': 'new_column_3',
  'name': 'New Column 3',
  'type': 'text',
  'max_length': 255,
  'is_required': True},
 'created_at': '2025-04-10T02:22:07.300895+00:00',
 'updated_at': '2025-04-10T02:22:11.116882+00:00',
 'datatable_id': 'ebc3dd3e-f88b-4f8b-8c72-6cfcae0a0cd4'}
await client.delete_dataset_column(
    projects["items"][0]["id"], datasets["id"], "new_column_3"
)
Rows (for datasets)
RagasApiClient.delete_dataset_row
RagasApiClient.delete_dataset_row (project_id:str, dataset_id:str, row_id:str)
Delete a row from a dataset.
RagasApiClient.update_dataset_row
RagasApiClient.update_dataset_row (project_id:str, dataset_id:str, row_id:str, data:Dict)
Update an existing row in a dataset.
RagasApiClient.create_dataset_row
RagasApiClient.create_dataset_row (project_id:str, dataset_id:str, id:str, data:Dict)
Create a new row in a dataset.
RagasApiClient.get_dataset_row
RagasApiClient.get_dataset_row (project_id:str, dataset_id:str, row_id:str)
Get a specific row in a dataset.
RagasApiClient.list_dataset_rows
RagasApiClient.list_dataset_rows (project_id:str, dataset_id:str, limit:int=50, offset:int=0, order_by:Optional[str]=None, sort_dir:Optional[str]=None)
List rows in a dataset.
datasets["id"]'3374b891-8398-41bd-8f81-2867759df294'
await client.create_dataset_row(
    project_id=projects["items"][0]["id"],
    dataset_id=datasets["id"],
    id="",
    data={"new_column_3": "name"},
)
{'id': '',
 'data': {'id': '', 'new_column_3': 'name'},
 'created_at': '2025-04-16T17:46:39.100525+00:00',
 'updated_at': '2025-04-16T17:46:39.100525+00:00',
 'datatable_id': '3374b891-8398-41bd-8f81-2867759df294'}
Get a Dataset Visualized - Created From UI
Let’s create a new dataset and add columns and rows via the endpoint to see how it behaves.
# generate a dataset
dataset = await client.create_dataset(
    project_id=TEST_PROJECT_ID,
    name="Dataset Visualized from UI",
    description="This is a dataset created from the UI",
)
# show url
WEB_ENDPOINT = "https://dev.app.ragas.io"
url = f"{WEB_ENDPOINT}/dashboard/projects/{TEST_PROJECT_ID}/datasets/{dataset['id']}"
url
'https://dev.app.ragas.io/dashboard/projects/e1b3f1e4-d344-48f4-a178-84e7e32e6ab6/datasets/dbccf6aa-b923-47ed-8e97-bd46f2f2cee8'
# list columns
columns = await client.list_dataset_columns(TEST_PROJECT_ID, dataset["id"])
# list rows
rows = await client.list_dataset_rows(TEST_PROJECT_ID, dataset["id"])
columns
{'items': [],
 'pagination': {'offset': 0,
  'limit': 50,
  'total': 0,
  'order_by': 'created_at',
  'sort_dir': 'asc'}}
rows
{'items': [],
 'pagination': {'offset': 0,
  'limit': 50,
  'total': 0,
  'order_by': 'created_at',
  'sort_dir': 'asc'}}
Create a Dataset from data
We want to be able to use the API with plain Python data of type t.List[t.Dict].
# how we want the data to look
data = [
    {
        "id": "1",
        "query": "What is the capital of France?",
        "persona": "John",
        "ground_truth": "Paris",
    },
    {
        "id": "2",
        "query": "What is the capital of Germany?",
        "persona": "Jane",
        "ground_truth": "Berlin",
    },
    {
        "id": "3",
        "query": "What is the capital of Italy?",
        "persona": "John",
        "ground_truth": "Rome",
    },
]
# print out column types
print([col.value for col in ColumnType])
['number', 'text', 'longText', 'select', 'date', 'multiSelect', 'checkbox', 'custom']
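Given those column types, a natural step is to infer column definitions from plain dicts so callers do not have to spell them out. Here is an illustrative sketch of that idea; infer_columns and the _PY_TO_COLUMN mapping are assumptions for this doc, not part of the client, and the type strings come from the ColumnType values printed above.

```python
import typing as t

# Map Python value types to plausible ColumnType string values.
_PY_TO_COLUMN = {bool: "checkbox", int: "number", float: "number", str: "text"}


def infer_columns(rows: t.List[t.Dict]) -> t.List[t.Dict]:
    """Build {"name", "type"} column specs from the first row's values."""
    first = rows[0]
    return [{"name": key, "type": _PY_TO_COLUMN.get(type(value), "text")}
            for key, value in first.items()]


rows = [{"id": 1, "query": "What is the capital of France?",
         "persona": "John", "ground_truth": "Paris"}]
print(infer_columns(rows))
```

Note the id here is an int so it maps to "number"; string-valued ids would infer as "text" unless declared explicitly, as the test data below does.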
# it should be able to handle simple python dicts
data = [
    {
        "id": "1",
        "query": "What is the capital of France?",
        "persona": "John",
        "ground_truth": "Paris",
    },
    {
        "id": "2",
        "query": "What is the capital of Germany?",
        "persona": "Jane",
        "ground_truth": "Berlin",
    },
]
There are two ways to pass in data:
- as simple dicts
 
data = [
    {"column_1": "value", "column_2": "value"}
]
- or, if you want to give more settings, as typed dicts
 
data = [
    {
        "column_1": {"data": "value", "type": ColumnType.text},
        "column_2": {"data": "value", "type": ColumnType.number},
    }
]
# test data
test_data_columns = [
    {"name": "id", "type": ColumnType.NUMBER.value},
    {"name": "query", "type": ColumnType.TEXT.value},
    {"name": "persona", "type": ColumnType.TEXT.value},
    {"name": "ground_truth", "type": ColumnType.TEXT.value},
]
test_data_rows = [{
    "id": "1",
    "query": "What is the capital of France?",
    "persona": "John",
    "ground_truth": "Paris",
}, {
    "id": "2",
    "query": "What is the capital of Germany?",
    "persona": "Jane",
    "ground_truth": "Berlin",
}, {
    "id": "3",
    "query": "What is the capital of Italy?",
    "persona": "John",
    "ground_truth": "Rome",
}]
create_nano_id
create_nano_id (size=12)
# Usage
nano_id = create_nano_id()  # e.g., "8dK9cNw3mP5x"
nano_id
'Anvz5k9geU7T'
Row
Row (id:str=<factory>, data:List[__main__.RowCell])
*A Pydantic model representing a dataset row: a generated id plus a list of RowCell values.*
RowCell
RowCell (data:Any, column_id:str)
*A Pydantic model representing a single cell in a row: the cell's data and the column_id it belongs to.*
Column
Column (id:str=<factory>, name:str, type:str, settings:Dict=<factory>, col_order:Optional[int]=None)
*A Pydantic model describing a dataset column: a generated id, name, type, optional settings, and an optional col_order.*
RagasApiClient.create_dataset_with_data
RagasApiClient.create_dataset_with_data (project_id:str, name:str, description:str, columns:List[__main__.Column], rows:List[__main__.Row], batch_size:int=50)
*Create a dataset with columns and rows.
This method creates a dataset and populates it with columns and rows in an optimized way using concurrent requests.
Args:
    project_id: Project ID
    name: Dataset name
    description: Dataset description
    columns: List of column definitions
    rows: List of row data
    batch_size: Number of operations to perform concurrently
Returns: The created dataset*
Now let's test this.
# Create Column objects
column_objects = []
for col in test_data_columns:
    column_objects.append(Column(
        name=col["name"],
        type=col["type"]
        # id and settings will be auto-generated
    ))
# Create a mapping of column names to their IDs for creating rows
column_map = {col.name: col.id for col in column_objects}
# Create Row objects
row_objects = []
for row in test_data_rows:
    cells = []
    for key, value in row.items():
        if key in column_map:  # Skip any extra fields not in columns
            cells.append(RowCell(
                data=value,
                column_id=column_map[key]
            ))
    row_objects.append(Row(data=cells))
# Now we can create the dataset
dataset = await client.create_dataset_with_data(
    project_id=TEST_PROJECT_ID,
    name="Capitals Dataset",
    description="A dataset about capital cities",
    columns=column_objects,
    rows=row_objects
)
print(f"Created dataset with ID: {dataset['id']}")
# Verify the data
columns = await client.list_dataset_columns(TEST_PROJECT_ID, dataset["id"])
print(f"Created {len(columns['items'])} columns")
rows = await client.list_dataset_rows(TEST_PROJECT_ID, dataset["id"])
print(f"Created {len(rows['items'])} rows")
Created dataset with ID: 5e7912f4-6a65-4d0c-bf79-0fab9ddda40c
Created 4 columns
Created 3 rows
# get dataset url
url = f"{WEB_ENDPOINT}/dashboard/projects/{TEST_PROJECT_ID}/datasets/{dataset['id']}"
url
'https://dev.app.ragas.io/dashboard/projects/e1b3f1e4-d344-48f4-a178-84e7e32e6ab6/datasets/5e7912f4-6a65-4d0c-bf79-0fab9ddda40c'
# cleanup
await client.delete_dataset(TEST_PROJECT_ID, dataset["id"])
The same, but for experiments.
RagasApiClient.delete_experiment_row
RagasApiClient.delete_experiment_row (project_id:str, experiment_id:str, row_id:str)
Delete a row from an experiment.
RagasApiClient.update_experiment_row
RagasApiClient.update_experiment_row (project_id:str, experiment_id:str, row_id:str, data:Dict)
Update an existing row in an experiment.
RagasApiClient.create_experiment_row
RagasApiClient.create_experiment_row (project_id:str, experiment_id:str, id:str, data:Dict)
Create a new row in an experiment.
RagasApiClient.get_experiment_row
RagasApiClient.get_experiment_row (project_id:str, experiment_id:str, row_id:str)
Get a specific row in an experiment.
RagasApiClient.list_experiment_rows
RagasApiClient.list_experiment_rows (project_id:str, experiment_id:str, limit:int=50, offset:int=0, order_by:Optional[str]=None, sort_dir:Optional[str]=None)
List rows in an experiment.
RagasApiClient.delete_experiment_column
RagasApiClient.delete_experiment_column (project_id:str, experiment_id:str, column_id:str)
Delete a column from an experiment.
RagasApiClient.update_experiment_column
RagasApiClient.update_experiment_column (project_id:str, experiment_id:str, column_id:str, **column_data)
Update an existing column in an experiment.
RagasApiClient.create_experiment_column
RagasApiClient.create_experiment_column (project_id:str, experiment_id:str, id:str, name:str, type:str, col_order:Optional[int]=None, settings:Optional[Dict]=None)
Create a new column in an experiment.
RagasApiClient.get_experiment_column
RagasApiClient.get_experiment_column (project_id:str, experiment_id:str, column_id:str)
Get a specific column in an experiment.
RagasApiClient.list_experiment_columns
RagasApiClient.list_experiment_columns (project_id:str, experiment_id:str, limit:int=50, offset:int=0, order_by:Optional[str]=None, sort_dir:Optional[str]=None)
List columns in an experiment.
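All of the `list_*` endpoints above share the same `limit`/`offset` pagination parameters and return a dict with an `items` list. A minimal sketch of walking every page; `fetch_page` here is a local stand-in for an (async) client call such as `client.list_experiment_rows(...)`, not part of the client itself:

```python
# Offset pagination sketch. fetch_page stands in for an API call like
# client.list_experiment_rows(project_id, experiment_id, limit=..., offset=...),
# which returns a dict with an "items" list.
def fetch_page(items, limit, offset):
    return {"items": items[offset:offset + limit]}

def fetch_all(items, limit=50):
    """Collect every item by requesting successive pages."""
    collected, offset = [], 0
    while True:
        page = fetch_page(items, limit, offset)["items"]
        collected.extend(page)
        if len(page) < limit:  # a short page means we reached the end
            break
        offset += limit
    return collected
```

Against the real client, the same loop works with `await` on each page request.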
await client.create_experiment(TEST_PROJECT_ID, "New Experiment", "This is a new experiment")
{'id': '7c695b58-7fc3-464c-a18b-a96e35f9684d',
 'name': 'New Experiment',
 'description': 'This is a new experiment',
 'updated_at': '2025-04-09T17:03:44.340782+00:00',
 'created_at': '2025-04-09T17:03:44.340782+00:00',
 'version_counter': 0,
 'project_id': 'e1b3f1e4-d344-48f4-a178-84e7e32e6ab6'}
experiments = await client.list_experiments(TEST_PROJECT_ID)
EXPERIMENT_ID = experiments["items"][0]["id"]
EXPERIMENT_ID
'78fd6c58-7edf-4239-93d1-4f49185d8e49'
RagasApiClient.create_experiment_with_data
RagasApiClient.create_experiment_with_data (project_id:str, name:str, description:str, columns:List[__main__.Column], rows:List[__main__.Row], batch_size:int=50)
*Create an experiment with columns and rows.
This method creates an experiment and populates it with columns and rows in an optimized way using concurrent requests.
Args:
    project_id: Project ID
    name: Experiment name
    description: Experiment description
    columns: List of column definitions
    rows: List of row data
    batch_size: Number of operations to perform concurrently
Returns: The created experiment*
RagasApiClient.convert_raw_data
RagasApiClient.convert_raw_data (column_defs:List[Dict], row_data:List[Dict])
*Convert raw data to column and row objects.
Args:
    column_defs: List of column definitions (dicts with name, type)
    row_data: List of dictionaries with row data
Returns: Tuple of (columns, rows)*
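`convert_raw_data` pairs naturally with `create_experiment_with_data`: define columns and rows as plain dicts, convert them, then create the experiment in one call. A hedged sketch (the data below is hypothetical, and the awaited calls assume an initialized client and `TEST_PROJECT_ID`):

```python
# Raw data in the shape convert_raw_data expects: column_defs are dicts
# with "name" and "type"; row_data maps column names to cell values.
column_defs = [
    {"name": "question", "type": "text"},
    {"name": "answer", "type": "text"},
]
row_data = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "What is the capital of Japan?", "answer": "Tokyo"},
]

# In an async context, against a live client:
# columns, rows = client.convert_raw_data(column_defs, row_data)
# experiment = await client.create_experiment_with_data(
#     project_id=TEST_PROJECT_ID,
#     name="Capitals Experiment",
#     description="An experiment about capital cities",
#     columns=columns,
#     rows=rows,
# )
```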
RagasApiClient.create_column_map
RagasApiClient.create_column_map (columns:List[__main__.Column])
*Create a mapping of column names to IDs.
Args:
    columns: List of column objects
Returns: Dictionary mapping column names to IDs*
RagasApiClient.create_row
RagasApiClient.create_row (data:Dict[str,Any], column_map:Dict[str,str], id:Optional[str]=None)
*Create a Row object from a dictionary.
Args:
    data: Dictionary mapping column names to values
    column_map: Dictionary mapping column names to column IDs
    id: Custom ID (generates one if not provided)
Returns: Row object*
RagasApiClient.create_column
RagasApiClient.create_column (name:str, type:str, settings:Optional[Dict]=None, col_order:Optional[int]=None, id:Optional[str]=None)
*Create a Column object.
Args:
    name: Column name
    type: Column type (use ColumnType enum)
    settings: Column settings
    col_order: Column order
    id: Custom ID (generates one if not provided)
Returns: Column object*
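Taken together, these helpers implement a simple pattern: columns get generated IDs, a name-to-ID map is built, and each row dict is translated into cells keyed by column ID (unknown keys are dropped). A local sketch of that pattern, with illustrative names of our own rather than the client methods themselves:

```python
import uuid

def make_column(name, type_, id_=None):
    """Like create_column: generate an ID when none is given."""
    return {"id": id_ or str(uuid.uuid4()), "name": name, "type": type_}

def make_column_map(columns):
    """Like create_column_map: column name -> column ID."""
    return {c["name"]: c["id"] for c in columns}

def make_row(data, column_map):
    """Like create_row: keep only keys that match a known column."""
    return {"data": [{"column_id": column_map[k], "data": v}
                     for k, v in data.items() if k in column_map]}
```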