:py:mod:`astronomer.providers.google.cloud.triggers.bigquery`
=============================================================

.. py:module:: astronomer.providers.google.cloud.triggers.bigquery


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   astronomer.providers.google.cloud.triggers.bigquery.BigQueryInsertJobTrigger
   astronomer.providers.google.cloud.triggers.bigquery.BigQueryCheckTrigger
   astronomer.providers.google.cloud.triggers.bigquery.BigQueryGetDataTrigger
   astronomer.providers.google.cloud.triggers.bigquery.BigQueryIntervalCheckTrigger
   astronomer.providers.google.cloud.triggers.bigquery.BigQueryValueCheckTrigger
   astronomer.providers.google.cloud.triggers.bigquery.BigQueryTableExistenceTrigger


.. py:class:: BigQueryInsertJobTrigger(conn_id, job_id, project_id, dataset_id = None, table_id = None, delegate_to = None, impersonation_chain = None, poll_interval = 4.0)

   Bases: :py:obj:`airflow.triggers.base.BaseTrigger`

   BigQueryInsertJobTrigger runs on the trigger worker to perform an insert operation.

   :param conn_id: Reference to the Google Cloud connection ID
   :param job_id: The ID of the job. It will be suffixed with a hash of the job configuration
   :param project_id: Google Cloud project where the job is running
   :param dataset_id: The dataset ID of the requested table. (templated)
   :param table_id: The table ID of the requested table. (templated)
   :param delegate_to: The account to impersonate using domain-wide delegation of authority, if any
   :param impersonation_chain: Optional service account to impersonate using short-term credentials
   :param poll_interval: polling period in seconds to check for the job status

   .. py:method:: serialize()

      Serializes BigQueryInsertJobTrigger arguments and classpath.

   .. py:method:: run()
      :async:

      Gets the current job execution status and yields a TriggerEvent.


.. py:class:: BigQueryCheckTrigger(conn_id, job_id, project_id, dataset_id = None, table_id = None, delegate_to = None, impersonation_chain = None, poll_interval = 4.0)

   Bases: :py:obj:`BigQueryInsertJobTrigger`

   BigQueryCheckTrigger runs on the trigger worker.

   .. py:method:: serialize()

      Serializes BigQueryCheckTrigger arguments and classpath.

   .. py:method:: run()
      :async:

      Gets the current job execution status and yields a TriggerEvent.


.. py:class:: BigQueryGetDataTrigger(conn_id, job_id, project_id, dataset_id = None, table_id = None, delegate_to = None, impersonation_chain = None, poll_interval = 4.0)

   Bases: :py:obj:`BigQueryInsertJobTrigger`

   BigQueryGetDataTrigger runs on the trigger worker; inherits from the BigQueryInsertJobTrigger class.

   .. py:method:: serialize()

      Serializes BigQueryGetDataTrigger arguments and classpath.

   .. py:method:: run()
      :async:

      Gets the current job execution status and yields a TriggerEvent with response data.
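These triggers are normally handed off from a deferrable operator: ``execute()`` submits the
BigQuery job and calls ``self.defer()``, and the operator is resumed at ``method_name`` once the
trigger fires. The sketch below illustrates that flow; the operator class, the ``job_id`` value,
and the project name are illustrative assumptions, and the event payload is assumed to carry the
``status``/``message`` keys these triggers yield.

.. code-block:: python

   from airflow.exceptions import AirflowException
   from airflow.models.baseoperator import BaseOperator

   from astronomer.providers.google.cloud.triggers.bigquery import BigQueryInsertJobTrigger


   class MyBigQueryJobOperator(BaseOperator):
       """Hypothetical deferrable operator, for illustration only."""

       def execute(self, context):
           # In practice the job is submitted via a BigQuery hook and its ID captured here.
           job_id = "airflow_job_123"  # placeholder for the real submitted job ID

           # Suspend the task; the trigger is serialized and run on the triggerer.
           self.defer(
               trigger=BigQueryInsertJobTrigger(
                   conn_id="google_cloud_default",
                   job_id=job_id,
                   project_id="my-gcp-project",  # assumed project name
                   poll_interval=4.0,
               ),
               method_name="execute_complete",
           )

       def execute_complete(self, context, event):
           # Called when the trigger yields a TriggerEvent; the payload reports job status.
           if event["status"] == "error":
               raise AirflowException(event["message"])
           return event

Note that ``self.defer()`` relies on the trigger's ``serialize()`` output to reconstruct the
trigger on the triggerer process, which is why every trigger on this page implements it.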
.. py:class:: BigQueryIntervalCheckTrigger(conn_id, first_job_id, second_job_id, project_id, table, metrics_thresholds, date_filter_column = 'ds', days_back = -7, ratio_formula = 'max_over_min', ignore_zero = True, dataset_id = None, table_id = None, impersonation_chain = None, poll_interval = 4.0)

   Bases: :py:obj:`BigQueryInsertJobTrigger`

   BigQueryIntervalCheckTrigger runs on the trigger worker; inherits from the BigQueryInsertJobTrigger class.

   :param conn_id: Reference to the Google Cloud connection ID
   :param first_job_id: The ID of the first job performed
   :param second_job_id: The ID of the second job performed
   :param project_id: Google Cloud project where the job is running
   :param dataset_id: The dataset ID of the requested table. (templated)
   :param table: table name
   :param metrics_thresholds: dictionary of ratios indexed by metrics
   :param date_filter_column: column name to use for filtering by date (defaults to ``ds``)
   :param days_back: number of days between ``ds`` and the ``ds`` to check against
   :param ratio_formula: formula used to compute the ratio between the two metrics (defaults to ``max_over_min``)
   :param ignore_zero: whether to ignore zero metrics
   :param table_id: The table ID of the requested table. (templated)
   :param impersonation_chain: Optional service account to impersonate using short-term credentials
   :param poll_interval: polling period in seconds to check for the job status

   .. py:method:: serialize()

      Serializes BigQueryIntervalCheckTrigger arguments and classpath.

   .. py:method:: run()
      :async:

      Gets the current job execution status and yields a TriggerEvent.


.. py:class:: BigQueryValueCheckTrigger(conn_id, sql, pass_value, job_id, project_id, tolerance = None, dataset_id = None, table_id = None, impersonation_chain = None, poll_interval = 4.0)

   Bases: :py:obj:`BigQueryInsertJobTrigger`

   BigQueryValueCheckTrigger runs on the trigger worker; inherits from the BigQueryInsertJobTrigger class.

   :param conn_id: Reference to the Google Cloud connection ID
   :param sql: the SQL to be executed
   :param pass_value: the expected value the query result is checked against
   :param job_id: The ID of the job
   :param project_id: Google Cloud project where the job is running
   :param tolerance: allowed tolerance when comparing the query result with ``pass_value``
   :param dataset_id: The dataset ID of the requested table. (templated)
   :param table_id: The table ID of the requested table. (templated)
   :param impersonation_chain: Optional service account to impersonate using short-term credentials
   :param poll_interval: polling period in seconds to check for the job status

   .. py:method:: serialize()

      Serializes BigQueryValueCheckTrigger arguments and classpath.

   .. py:method:: run()
      :async:

      Gets the current job execution status and yields a TriggerEvent.


.. py:class:: BigQueryTableExistenceTrigger(project_id, dataset_id, table_id, gcp_conn_id, hook_params, poke_interval = 4.0)

   Bases: :py:obj:`airflow.triggers.base.BaseTrigger`

   BigQueryTableExistenceTrigger runs on the trigger worker and waits for a table to exist in Google BigQuery.

   :param project_id: Google Cloud project where the job is running
   :param dataset_id: The dataset ID of the requested table.
   :param table_id: The table ID of the requested table.
   :param gcp_conn_id: Reference to the Google Cloud connection ID
   :param hook_params: parameters to pass to the hook
   :param poke_interval: polling period in seconds to check for table existence

   .. py:method:: serialize()

      Serializes BigQueryTableExistenceTrigger arguments and classpath.

   .. py:method:: run()
      :async:

      Runs until the table exists in Google BigQuery.
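Because ``run()`` is an async generator, a trigger can also be exercised outside Airflow's
triggerer process, which can be handy for local testing. The sketch below assumes a reachable
BigQuery project and a configured ``google_cloud_default`` Airflow connection; all resource
names are illustrative assumptions.

.. code-block:: python

   import asyncio

   from astronomer.providers.google.cloud.triggers.bigquery import (
       BigQueryTableExistenceTrigger,
   )


   async def wait_for_table():
       trigger = BigQueryTableExistenceTrigger(
           project_id="my-gcp-project",  # assumed project name
           dataset_id="my_dataset",      # assumed dataset
           table_id="my_table",          # assumed table
           gcp_conn_id="google_cloud_default",
           hook_params={},
           poke_interval=4.0,
       )
       # The trigger polls until the table exists, then yields a TriggerEvent.
       async for event in trigger.run():
           return event


   print(asyncio.run(wait_for_table()))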