:py:mod:`astronomer.providers.apache.hive.sensors.named_hive_partition`
========================================================================

.. py:module:: astronomer.providers.apache.hive.sensors.named_hive_partition


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   astronomer.providers.apache.hive.sensors.named_hive_partition.NamedHivePartitionSensorAsync


.. py:class:: NamedHivePartitionSensorAsync(*, partition_names, metastore_conn_id = 'metastore_default', poke_interval = 60 * 3, hook = None, **kwargs)

   Bases: :py:obj:`airflow.providers.apache.hive.sensors.named_hive_partition.NamedHivePartitionSensor`

   Waits asynchronously for a set of partitions to show up in Hive.

   .. note::
      This sensor uses the impyla library instead of PyHive, which the sync version of this sensor uses.

      Because impyla is used, set the connection to use port ``10000`` instead of ``9083``.
      For ``auth_mechanism='GSSAPI'`` the ticket renewal happens through the ``airflow kerberos`` command
      running on the worker/triggerer.

      You may also need to allow traffic from the Airflow worker/triggerer to the Hive instance, depending
      on where they are running. For example, consider adding an entry to the ``/etc/hosts`` file on the
      Airflow worker/triggerer that maps the EMR master node's public IP address to its private DNS name so
      that the network traffic is allowed.

      The Hive and Hadoop library versions in the ``Dockerfile`` should match those of the remote cluster.

   :param partition_names: List of fully qualified names of the partitions to wait for.
       A fully qualified name is of the form ``schema.table/pk1=pv1/pk2=pv2``,
       for example, ``default.users/ds=2016-01-01``.
   :param metastore_conn_id: Metastore thrift service connection id.

   .. py:method:: execute(context)

      Submit a job to Hive and defer.


   .. py:method:: execute_complete(context, event = None)

      Callback for when the trigger fires; returns immediately.
      Relies on the trigger to throw an exception, otherwise it assumes execution was successful.
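
Example usage
~~~~~~~~~~~~~

A minimal usage sketch, assuming Airflow 2.4+ with the ``astronomer-providers`` package installed. The DAG id,
task id, partition names, and the ``hive_metastore`` connection id are illustrative placeholders, not values
defined by this module:

.. code-block:: python

    import pendulum
    from airflow import DAG

    from astronomer.providers.apache.hive.sensors.named_hive_partition import (
        NamedHivePartitionSensorAsync,
    )

    with DAG(
        dag_id="example_named_hive_partition_sensor_async",
        start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
        schedule=None,
        catchup=False,
    ):
        # Defers until every listed partition exists; each name follows the
        # ``schema.table/pk1=pv1/pk2=pv2`` form described above.
        wait_for_partitions = NamedHivePartitionSensorAsync(
            task_id="wait_for_partitions",
            partition_names=[
                "default.users/ds=2016-01-01",
                "default.events/ds=2016-01-01",
            ],
            metastore_conn_id="hive_metastore",  # hypothetical connection id
            poke_interval=60,
        )

Because the sensor defers to a trigger while waiting, the worker slot is released between checks, unlike the
synchronous ``NamedHivePartitionSensor``.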