
apache spark - Pyspark dataframe: creating column based on other column values

I have a pyspark dataframe:

[image: a DataFrame with (at least) the columns country and state; in the first row, country is "USA" and state is "CA"]

Now I want to add a new column called "countryAndState" where, for example, the value for the first row would be "USA_CA". I have tried several approaches, the last of which was the following:

df_2 = df.withColumn("countryAndState", '{}_{}'.format(df.country, df.state))

I have also tried with "country" and "state" instead, with bare country and state, and with col(), but nothing seems to work. Can anyone help me solve this?



1 Answer


You can't use Python format strings to build a Spark column: format runs eagerly on the driver and produces a plain Python string, not a Column expression, which is why withColumn rejects it. Use concat instead:

import pyspark.sql.functions as F

df_2 = df.withColumn("countryAndState", F.concat(F.col('country'), F.lit('_'), F.col('state')))

or concat_ws, if you need to join many columns together with a given separator:

import pyspark.sql.functions as F

df_2 = df.withColumn("countryAndState", F.concat_ws('_', F.col('country'), F.col('state')))
