:py:mod:`astronomer.providers.core.sensors.external_task`
=========================================================

.. py:module:: astronomer.providers.core.sensors.external_task


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   astronomer.providers.core.sensors.external_task.ExternalTaskSensorAsync
   astronomer.providers.core.sensors.external_task.ExternalDeploymentTaskSensorAsync


.. py:class:: ExternalTaskSensorAsync(*, external_dag_id, external_task_id = None, external_task_ids = None, external_task_group_id = None, allowed_states = None, failed_states = None, execution_delta = None, execution_date_fn = None, check_existence = False, **kwargs)

   Bases: :py:obj:`airflow.sensors.external_task.ExternalTaskSensor`

   Waits for a different DAG, task group, or task to complete for a specific logical date.

   If both ``external_task_group_id`` and ``external_task_id`` are ``None`` (the default), the
   sensor waits for the DAG. ``external_task_group_id`` and ``external_task_id`` cannot be set at
   the same time.

   By default, the ExternalTaskSensor waits for the external task to succeed, at which point it
   also succeeds. It does *not* fail when the external task fails; instead it keeps checking the
   status until the sensor times out, giving you time to retry the external task without also
   having to clear the sensor.

   You can alter this behaviour by setting the states that cause the sensor to fail. For example,
   setting ``allowed_states=[State.FAILED]`` and ``failed_states=[State.SUCCESS]`` flips the
   behaviour: the sensor goes green when the external task *fails* and immediately goes red when
   the external task *succeeds*.

   Note that ``soft_fail`` is respected when examining the ``failed_states``: if the external task
   enters a failed state and ``soft_fail == True``, the sensor will *skip* rather than fail.
   As a result, setting ``soft_fail=True`` and ``failed_states=[State.SKIPPED]`` makes the sensor
   skip if the external task skips.

   :param external_dag_id: The dag_id that contains the task you want to wait for
   :param external_task_id: The task_id of the task you want to wait for.
   :param external_task_ids: The list of task_ids you want to wait for. If ``None`` (the default),
       the sensor waits for the DAG. Either ``external_task_id`` or ``external_task_ids`` can be
       passed to ExternalTaskSensor, but not both.
   :param allowed_states: Iterable of allowed states, default is ``['success']``
   :param failed_states: Iterable of failed or disallowed states, default is ``None``
   :param execution_delta: Time difference with the previous execution to look at; the default is
       the same logical date as the current task or DAG. For yesterday, use (a positive!)
       ``datetime.timedelta(days=1)``. Either ``execution_delta`` or ``execution_date_fn`` can be
       passed to ExternalTaskSensor, but not both.
   :param execution_date_fn: Function that receives the current execution's logical date as the
       first positional argument and optionally any number of keyword arguments available in the
       context dictionary, and returns the desired logical dates to query. Either
       ``execution_delta`` or ``execution_date_fn`` can be passed to ExternalTaskSensor, but not
       both.
   :param check_existence: Set to ``True`` to check whether the external task exists (when
       ``external_task_id`` is not ``None``) or whether the DAG to wait for exists (when
       ``external_task_id`` is ``None``), and immediately cease waiting if the external task or
       DAG does not exist (default: ``False``).

   .. py:method:: execute(context)

      Correctly identify which trigger to execute, and defer execution as expected.

   .. py:method:: execute_complete(context, session, event = None)

      Verify that there is a success status for each task via execution date.

   .. py:method:: get_execution_dates(context)

      Helper function to set execution dates depending on which context and/or internal fields
      are populated.


.. py:class:: ExternalDeploymentTaskSensorAsync(*, endpoint, poll_interval = 5, **kwargs)

   Bases: :py:obj:`astronomer.providers.http.sensors.http.HttpSensorAsync`

   External deployment task sensor.

   Makes an HTTP call and polls for the response state of an externally deployed DAG task until
   it completes. Inherits from HttpSensorAsync; the host should be the external Deployment URL,
   with a header carrying the access token.

   .. seealso::

      - `Retrieve an access token and Deployment URL `_

   :param http_conn_id: The Connection ID to run the sensor against
   :param method: The HTTP request method to use
   :param endpoint: The relative part of the full URL
   :param request_params: The parameters to be added to the GET URL
   :param headers: The HTTP headers to be added to the GET request
   :param extra_options: Extra options for the ``requests`` library; see the ``requests``
       documentation (options to modify timeout, SSL, etc.)
   :param tcp_keep_alive: Enable TCP Keep Alive for the connection.
   :param tcp_keep_alive_idle: The TCP Keep Alive Idle parameter (corresponds to
       ``socket.TCP_KEEPIDLE``).
   :param tcp_keep_alive_count: The TCP Keep Alive count parameter (corresponds to
       ``socket.TCP_KEEPCNT``)
   :param tcp_keep_alive_interval: The TCP Keep Alive interval parameter (corresponds to
       ``socket.TCP_KEEPINTVL``)
   :param poke_interval: Time in seconds that the job should wait in between each try

   .. py:method:: execute(context)

      Defers to the trigger class to poll for the state of the job run until it reaches a failure
      state or a success state.

   .. py:method:: execute_complete(context, event = None)

      Callback for when the trigger fires; returns immediately. Returns ``True`` and logs the
      response if the state is a success state; otherwise raises ``ValueError``.
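The interaction between ``execution_delta``, ``execution_date_fn``, and the logical date queried in the external DAG can be sketched with plain ``datetime`` arithmetic. This is an illustrative sketch of the date mapping described above, not the sensor's actual implementation; the variable names are hypothetical.

```python
from datetime import datetime, timedelta

# Logical date of the task instance running the sensor (illustrative value).
current_logical_date = datetime(2023, 5, 2)

# execution_delta: the external DAG run is looked up at the current logical
# date MINUS the delta -- hence a positive timedelta for "yesterday".
execution_delta = timedelta(days=1)
external_logical_date = current_logical_date - execution_delta
assert external_logical_date == datetime(2023, 5, 1)

# execution_date_fn: receives the current logical date and returns the
# logical date(s) to query; this function is equivalent to the delta above.
def execution_date_fn(logical_date):
    return logical_date - timedelta(days=1)

assert execution_date_fn(current_logical_date) == external_logical_date
```

Remember that only one of ``execution_delta`` and ``execution_date_fn`` may be passed to the sensor.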
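The ``execute``/``execute_complete`` contract of ``ExternalDeploymentTaskSensorAsync`` described above (poll until a terminal state, succeed on success, raise ``ValueError`` otherwise) can be sketched as a plain synchronous loop. This is a simplified stand-in for the asynchronous trigger, and ``fetch_state`` is a hypothetical callable representing the HTTP call to the external Deployment's API.

```python
# Terminal-state sets assumed for illustration; the real sensor derives
# success/failure from the API response.
SUCCESS_STATES = {"success"}
FAILURE_STATES = {"failed", "upstream_failed"}

def wait_for_external_task(fetch_state):
    """Poll until the externally deployed task reaches a terminal state."""
    while True:
        state = fetch_state()
        if state in SUCCESS_STATES:
            # Mirrors execute_complete: log the response and return.
            return state
        if state in FAILURE_STATES:
            # Mirrors the documented failure path: raise ValueError.
            raise ValueError(f"External task finished in state {state!r}")
        # Not terminal yet: the real trigger waits poll_interval seconds
        # (asynchronously) before polling again.

# Simulate three polls: queued -> running -> success.
states = iter(["queued", "running", "success"])
assert wait_for_external_task(lambda: next(states)) == "success"
```

The actual sensor performs this loop inside a deferrable trigger, so no worker slot is occupied between polls.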