Commit 180d704

Fixes formatting for with_columns docs page

1 parent: e92f3f8

2 files changed: +13 -8 lines changed

docs/reference/decorators/with_columns.rst (+8 -3)

@@ -2,13 +2,18 @@
 with_columns
 =======================
 
-** Overview **
+
+--------
+Overview
+--------
 
 This is part of the hamilton pyspark integration. To install, run:
 
-`pip install sf-hamilton[pyspark]`
+``pip install sf-hamilton[pyspark]``
 
-**Reference Documentation**
+-----------------------
+Reference Documentation
+-----------------------
 
 .. autoclass:: hamilton.plugins.h_spark.with_columns
     :special-members: __init__
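For context on why this hunk reads the way it does: in reStructuredText, ``**text**`` is bold emphasis (not a heading), section titles are marked by a punctuation underline (with an optional matching overline), and inline code must use double backticks, since single backticks are interpreted text by default. A minimal sketch of the corrected style:

```rst
--------
Overview
--------

Install with ``pip install sf-hamilton[pyspark]``.
```

Note the underline/overline must be at least as long as the title text, or Sphinx emits a warning.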

hamilton/plugins/h_spark.py (+5 -5)

@@ -41,7 +41,7 @@ class SparkKoalasGraphAdapter(base.HamiltonGraphAdapter, base.ResultMixin):
     using the \
     `Pandas API on Spark <https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html>`__
 
-    Use `pip install sf-hamilton[spark]` to get the dependencies required to run this.
+    Use ``pip install sf-hamilton[spark]`` to get the dependencies required to run this.
 
     Currently, this class assumes you're running SPARK 3.2+. You'd generally use this if you have an existing spark \
     cluster running in your workplace, and you want to scale to very large data set sizes.
@@ -759,7 +759,7 @@ def transform_node(
 
     Note that, at this point, we don't actually know which columns will come from the
     base dataframe, and which will come from the upstream nodes. This is handled in the
-    `with_columns` decorator, so for now, we need to give it enough information to topologically
+    ``with_columns`` decorator, so for now, we need to give it enough information to topologically
     sort/assign dependencies.
 
     :param node_: Node to transform
@@ -950,10 +950,10 @@ def __init__(
     """Initializes a with_columns decorator for spark. This allows you to efficiently run
     groups of map operations on a dataframe, represented as pandas/primitives UDFs. This
     effectively "linearizes" compute -- meaning that a DAG of map operations can be run
-    as a set of `.withColumn` operations on a single dataframe -- ensuring that you don't have
-    to do a complex `extract` then `join` process on spark, which can be inefficient.
+    as a set of ``.withColumn`` operations on a single dataframe -- ensuring that you don't have
+    to do a complex ``extract`` then ``join`` process on spark, which can be inefficient.
 
-    Here's an example of calling it -- if you've seen `@subdag`, you should be familiar with
+    Here's an example of calling it -- if you've seen :py:class:`@subdag <hamilton.function_modifiers.recursive>`, you should be familiar with
     the concepts:
 
     .. code-block:: python
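The last hunk in this file does slightly more than swap backtick styles: it replaces a plain literal with a Sphinx Python-domain cross-reference role, so the rendered docs hyperlink to the target object. A minimal sketch of the difference (the target path is the one used in the diff):

```rst
`@subdag`
    Interpreted text -- rendered with the default role, not as code.

``@subdag``
    Inline literal -- rendered as code, but not linked.

:py:class:`@subdag <hamilton.function_modifiers.recursive>`
    Cross-reference -- rendered as a link titled "@subdag" pointing at
    hamilton.function_modifiers.recursive.
```

The ``title <target>`` form lets the visible text differ from the fully qualified target, which is why the docstring can keep showing the familiar ``@subdag`` name.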
