
This package helps replace accented characters with their corresponding non-accented ASCII characters
Often, we encounter data that includes characters with accents or other diacritical marks, collectively referred to as diacritics. When working with this data, there is frequently a need to substitute these accented characters with their non-accented ASCII equivalents.
This Python package simplifies exactly that: replacing accented characters with their non-accented ASCII equivalents.
It works in standard Python and integrates seamlessly with PySpark and Spark SQL for your data processing needs.
The package can be installed from the PyPI repository using the command below:
pip install replace_accents
Let's delve into detailed examples:
1. Python example
# Import the replace accents function
from replace_accents import replace_accents_characters
# Use the function to replace accented characters
replace_accents_characters("crème de la crème")
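This should strip the diacritics and return the plain-ASCII string. A minimal sanity-check sketch (assuming the function returns a str; the expected value below follows from the package's stated purpose):
from replace_accents import replace_accents_characters
# Accented characters should be mapped to their plain ASCII counterparts
assert replace_accents_characters("crème de la crème") == "creme de la creme"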
2. PySpark example
# Import the replace accents function
from replace_accents import replace_accents_characters
# Import the PySpark col function
from pyspark.sql.functions import col
# Register the Python function as both a PySpark UDF and a Spark SQL UDF
# (spark is the active SparkSession, e.g. in a Databricks notebook or the pyspark shell)
replace_accents_characters_pyspark_udf = spark.udf.register("replace_accents_characters_sparksql_udf", replace_accents_characters)
# Create a PySpark DataFrame from an existing table
df = spark.table("table_name")
# Apply the PySpark UDF to a DataFrame column
# (display() is a Databricks notebook helper; use .show() outside Databricks)
display(df.select("col1", replace_accents_characters_pyspark_udf(col("col1")).alias("replaced_col1")))
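If you do not have a table handy, the same flow can be tried end to end with a throwaway DataFrame. This is a minimal sketch, assuming a local SparkSession; the column values and names are illustrative only:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from replace_accents import replace_accents_characters
# Start (or reuse) a local SparkSession
spark = SparkSession.builder.appName("replace-accents-demo").getOrCreate()
# Register the function once; the returned object works in the DataFrame API,
# while the registered name works in Spark SQL
replace_accents_udf = spark.udf.register("replace_accents_characters_sparksql_udf", replace_accents_characters)
# Build a small test DataFrame instead of reading an existing table
df = spark.createDataFrame([("crème de la crème",), ("jalapeño",)], ["col1"])
# Apply the UDF and show the result
df.select("col1", replace_accents_udf(col("col1")).alias("replaced_col1")).show(truncate=False)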
3. Spark SQL example
# Import the replace accents function
from replace_accents import replace_accents_characters
# Register the Python function as both a PySpark UDF and a Spark SQL UDF
replace_accents_characters_pyspark_udf = spark.udf.register("replace_accents_characters_sparksql_udf", replace_accents_characters)
# Use the Spark SQL UDF inside a SQL query and show the result
spark.sql("select col1, replace_accents_characters_sparksql_udf(col1) as replaced_col1 from table_name").show()
You can get more information about this package here.