:py:mod:`astronomer.providers.google.cloud.operators.dataproc`
================================================================

.. py:module:: astronomer.providers.google.cloud.operators.dataproc

.. autoapi-nested-parse::

   This module contains Google Dataproc operators.


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   astronomer.providers.google.cloud.operators.dataproc.DataprocCreateClusterOperatorAsync
   astronomer.providers.google.cloud.operators.dataproc.DataprocDeleteClusterOperatorAsync
   astronomer.providers.google.cloud.operators.dataproc.DataprocSubmitJobOperatorAsync
   astronomer.providers.google.cloud.operators.dataproc.DataprocUpdateClusterOperatorAsync


.. py:class:: DataprocCreateClusterOperatorAsync(*, polling_interval = 5.0, **kwargs)

   Bases: :py:obj:`airflow.providers.google.cloud.operators.dataproc.DataprocCreateClusterOperator`

   Create a new cluster on Google Cloud Dataproc asynchronously.

   :param project_id: The ID of the Google Cloud project in which to create the cluster. (templated)
   :param cluster_name: Name of the cluster to create.
   :param labels: Labels that will be assigned to the created cluster.
   :param cluster_config: Required. The cluster config to create.
       If a dict is provided, it must be of the same form as the protobuf message
       :class:`~google.cloud.dataproc_v1.types.ClusterConfig`
   :param virtual_cluster_config: Optional. The virtual cluster config, used when creating a Dataproc
       cluster that does not directly control the underlying compute resources, for example, when
       creating a Dataproc-on-GKE cluster.
   :param region: The region in which the Dataproc cluster is created.
   :param delete_on_error: If true, the cluster will be deleted if it was created with ERROR state.
       Default value is true.
   :param use_if_exists: If true, use an existing cluster.
   :param request_id: Optional. A unique id used to identify the request. If the server receives two
       ``CreateClusterRequest`` requests with the same id, then the second request will be ignored and the
       first ``google.longrunning.Operation`` created and stored in the backend is returned.
   :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
       retried.
   :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
       ``retry`` is specified, the timeout applies to each individual attempt.
   :param metadata: Additional metadata that is provided to the method.
   :param gcp_conn_id: The connection ID to use connecting to Google Cloud.
   :param impersonation_chain: Optional service account to impersonate using short-term
       credentials, or chained list of accounts required to get the access_token
       of the last account in the list, which will be impersonated in the request.
       If set as a string, the account must grant the originating account
       the Service Account Token Creator IAM role.
       If set as a sequence, the identities from the list must grant
       Service Account Token Creator IAM role to the directly preceding identity, with first
       account from the list granting this role to the originating account (templated).
   :param polling_interval: Time in seconds to sleep between checks of cluster status.

   .. py:method:: execute(context)

      Call the create cluster API and defer to ``DataprocCreateClusterTrigger`` to check the cluster status.


   .. py:method:: execute_complete(context, event = None)

      Callback for when the trigger fires - returns immediately.

      Relies on the trigger to throw an exception, otherwise it assumes execution was successful.

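   The snippet below is a minimal usage sketch of this operator inside a DAG; the project ID, region,
   cluster name, and cluster config are illustrative placeholders, not values defined by this module.

   .. code-block:: python

      from datetime import datetime

      from airflow import DAG
      from astronomer.providers.google.cloud.operators.dataproc import (
          DataprocCreateClusterOperatorAsync,
      )

      # Hypothetical project, region, and cluster values for illustration only.
      CLUSTER_CONFIG = {
          "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
          "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
      }

      with DAG(
          dag_id="example_dataproc_create_cluster_async",
          start_date=datetime(2022, 1, 1),
          schedule_interval=None,
          catchup=False,
      ) as dag:
          create_cluster = DataprocCreateClusterOperatorAsync(
              task_id="create_cluster",
              project_id="my-gcp-project",
              region="us-central1",
              cluster_name="example-cluster",
              cluster_config=CLUSTER_CONFIG,
              polling_interval=5.0,
          )
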
.. py:class:: DataprocDeleteClusterOperatorAsync(*, polling_interval = 5.0, **kwargs)

   Bases: :py:obj:`airflow.providers.google.cloud.operators.dataproc.DataprocDeleteClusterOperator`

   Delete a cluster on Google Cloud Dataproc asynchronously.

   :param region: Required. The Cloud Dataproc region in which to handle the request (templated).
   :param cluster_name: Required. The cluster name (templated).
   :param project_id: Optional. The ID of the Google Cloud project that the cluster belongs to (templated).
   :param cluster_uuid: Optional. Specifying the ``cluster_uuid`` means the RPC should fail if a cluster
       with the specified UUID does not exist.
   :param request_id: Optional. A unique id used to identify the request. If the server receives two
       ``DeleteClusterRequest`` requests with the same id, then the second request will be ignored and the
       first ``google.longrunning.Operation`` created and stored in the backend is returned.
   :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
       retried.
   :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
       ``retry`` is specified, the timeout applies to each individual attempt.
   :param metadata: Additional metadata that is provided to the method.
   :param gcp_conn_id: The connection ID to use connecting to Google Cloud.
   :param impersonation_chain: Optional service account to impersonate using short-term
       credentials, or chained list of accounts required to get the access_token
       of the last account in the list, which will be impersonated in the request.
       If set as a string, the account must grant the originating account
       the Service Account Token Creator IAM role.
       If set as a sequence, the identities from the list must grant
       Service Account Token Creator IAM role to the directly preceding identity, with first
       account from the list granting this role to the originating account (templated).
   :param polling_interval: Time in seconds to sleep between checks of cluster status.

   .. py:method:: execute(context)

      Call the delete cluster API and defer until the cluster is completely deleted.


   .. py:method:: execute_complete(context, event = None)

      Callback for when the trigger fires - returns immediately.

      Relies on the trigger to throw an exception, otherwise it assumes execution was successful.

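   A minimal usage sketch of this operator as a teardown task; the project ID, region, and cluster name
   are illustrative placeholders, not values defined by this module.

   .. code-block:: python

      from astronomer.providers.google.cloud.operators.dataproc import (
          DataprocDeleteClusterOperatorAsync,
      )

      # Hypothetical identifiers for illustration only.
      delete_cluster = DataprocDeleteClusterOperatorAsync(
          task_id="delete_cluster",
          project_id="my-gcp-project",
          region="us-central1",
          cluster_name="example-cluster",
          trigger_rule="all_done",  # run the teardown even if upstream tasks fail
          polling_interval=5.0,
      )
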
.. py:class:: DataprocSubmitJobOperatorAsync(*, job, region, project_id = None, request_id = None, retry = DEFAULT, timeout = None, metadata = (), gcp_conn_id = 'google_cloud_default', impersonation_chain = None, asynchronous = False, deferrable = False, polling_interval_seconds = 10, cancel_on_kill = True, wait_timeout = None, **kwargs)

   Bases: :py:obj:`airflow.providers.google.cloud.operators.dataproc.DataprocSubmitJobOperator`

   Submit a job to a cluster and wait until it is completely finished or an error occurs.

   :param project_id: Optional. The ID of the Google Cloud project that the job belongs to.
   :param region: Required. The Cloud Dataproc region in which to handle the request.
   :param job: Required. The job resource. If a dict is provided, it must be of the same form as the
       protobuf message :class:`~google.cloud.dataproc_v1.types.Job`
   :param request_id: Optional. A unique id used to identify the request. If the server receives two
       ``SubmitJobRequest`` requests with the same id, then the second request will be ignored and the
       first ``Job`` created and stored in the backend is returned. It is recommended to always set this
       value to a UUID.
   :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
       retried.
   :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
       ``retry`` is specified, the timeout applies to each individual attempt.
   :param metadata: Additional metadata that is provided to the method.
   :param gcp_conn_id: The connection ID to use connecting to Google Cloud Platform.
   :param impersonation_chain: Optional service account to impersonate using short-term
       credentials, or chained list of accounts required to get the access_token
       of the last account in the list, which will be impersonated in the request.
       If set as a string, the account must grant the originating account
       the Service Account Token Creator IAM role.
       If set as a sequence, the identities from the list must grant
       Service Account Token Creator IAM role to the directly preceding identity, with first
       account from the list granting this role to the originating account (templated).
   :param cancel_on_kill: Flag which indicates whether to cancel the hook's job when ``on_kill`` is called.

   .. py:method:: execute(context)

      Airflow runs this method on the worker and defers using the trigger.

      Submit the job and get the job_id, which is then used to defer and poll in the trigger.


   .. py:method:: execute_complete(context, event = None)

      Callback for when the trigger fires - returns immediately.

      Relies on the trigger to throw an exception, otherwise it assumes execution was successful.

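   A minimal usage sketch submitting a PySpark job; the job definition, project ID, region, cluster name,
   and GCS path are illustrative placeholders, not values defined by this module.

   .. code-block:: python

      from astronomer.providers.google.cloud.operators.dataproc import (
          DataprocSubmitJobOperatorAsync,
      )

      # Hypothetical job resource for illustration only; it follows the dict form of
      # google.cloud.dataproc_v1.types.Job described above.
      PYSPARK_JOB = {
          "reference": {"project_id": "my-gcp-project"},
          "placement": {"cluster_name": "example-cluster"},
          "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/wordcount.py"},
      }

      submit_job = DataprocSubmitJobOperatorAsync(
          task_id="submit_pyspark_job",
          project_id="my-gcp-project",
          region="us-central1",
          job=PYSPARK_JOB,
      )
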
.. py:class:: DataprocUpdateClusterOperatorAsync(*, polling_interval = 5.0, **kwargs)

   Bases: :py:obj:`airflow.providers.google.cloud.operators.dataproc.DataprocUpdateClusterOperator`

   Update an existing cluster in a Google Cloud Platform project asynchronously.

   :param region: Required. The Cloud Dataproc region in which to handle the request.
   :param project_id: Optional. The ID of the Google Cloud project the cluster belongs to.
   :param cluster_name: Required. The cluster name.
   :param cluster: Required. The changes to the cluster. If a dict is provided, it must be of the same
       form as the protobuf message :class:`~google.cloud.dataproc_v1.types.Cluster`
   :param update_mask: Required. Specifies the path, relative to ``Cluster``, of the field to update. For
       example, to change the number of workers in a cluster to 5, the ``update_mask`` parameter would be
       specified as ``config.worker_config.num_instances``, and the ``PATCH`` request body would specify
       the new value. If a dict is provided, it must be of the same form as the protobuf message
       :class:`~google.protobuf.field_mask_pb2.FieldMask`
   :param graceful_decommission_timeout: Optional. Timeout for graceful YARN decommissioning. Graceful
       decommissioning allows removing nodes from the cluster without interrupting jobs in progress. The
       timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes
       (and potentially interrupting jobs). The default timeout is 0 (for forceful decommission), and the
       maximum allowed timeout is 1 day.
   :param request_id: Optional. A unique id used to identify the request. If the server receives two
       ``UpdateClusterRequest`` requests with the same id, then the second request will be ignored and the
       first ``google.longrunning.Operation`` created and stored in the backend is returned.
   :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
       retried.
   :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
       ``retry`` is specified, the timeout applies to each individual attempt.
   :param metadata: Additional metadata that is provided to the method.
   :param gcp_conn_id: The connection ID to use connecting to Google Cloud.
   :param impersonation_chain: Optional service account to impersonate using short-term
       credentials, or chained list of accounts required to get the access_token
       of the last account in the list, which will be impersonated in the request.
       If set as a string, the account must grant the originating account
       the Service Account Token Creator IAM role.
       If set as a sequence, the identities from the list must grant
       Service Account Token Creator IAM role to the directly preceding identity, with first
       account from the list granting this role to the originating account (templated).
   :param polling_interval: Time in seconds to sleep between checks of cluster status.

   .. py:method:: execute(context)

      Call the update cluster API and defer until the cluster update is complete.


   .. py:method:: execute_complete(context, event = None)

      Callback for when the trigger fires - returns immediately.

      Relies on the trigger to throw an exception, otherwise it assumes execution was successful.

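   A minimal usage sketch that scales the worker count, mirroring the ``update_mask`` example above; the
   project ID, region, cluster name, and timeout values are illustrative placeholders, not values defined
   by this module.

   .. code-block:: python

      from astronomer.providers.google.cloud.operators.dataproc import (
          DataprocUpdateClusterOperatorAsync,
      )

      # Hypothetical identifiers and values for illustration only; the dicts follow the
      # protobuf message forms (Cluster, FieldMask, Duration) noted in the parameter list.
      scale_cluster = DataprocUpdateClusterOperatorAsync(
          task_id="scale_cluster",
          project_id="my-gcp-project",
          region="us-central1",
          cluster_name="example-cluster",
          cluster={"config": {"worker_config": {"num_instances": 5}}},
          update_mask={"paths": ["config.worker_config.num_instances"]},
          graceful_decommission_timeout={"seconds": 600},
          polling_interval=5.0,
      )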