Deploy Flask, PySpark code in Azure App Service


Below is the code I used to test whether an Azure App Service can run this application. It does not work as expected: the home_page route works, but the process_data route fails with a 500 error. So my question is whether an App Service can run Spark at all. The code works on my local system, but the Spark part does not work on Azure.

from datetime import date, datetime

from flask import Flask, jsonify
from pyspark.sql import Row, SparkSession

app = Flask(__name__)

# Local Spark session: all cores on the host, no cluster, no Spark UI
spark = SparkSession.builder \
    .appName("PySparkApp") \
    .master("local[*]") \
    .config("spark.driver.bindAddress", "0.0.0.0") \
    .config("spark.ui.enabled", "false") \
    .config("spark.python.worker.reuse", "true") \
    .getOrCreate()
spark.sparkContext.setLogLevel("ERROR")  # Suppress non-critical logs

@app.route('/')
def home_page():
    return "Welcome To Home Page..."

@app.route('/process')
def process_data():
    # Create a DataFrame with some example data
    df = spark.createDataFrame([
        Row(id=1, rate=2.0, description='string1', date=date(2000, 1, 1), datetime=datetime(2000, 1, 1, 12, 0)),
        Row(id=2, rate=3.0, description='string2', date=date(2000, 2, 1), datetime=datetime(2000, 1, 2, 12, 0)),
        Row(id=4, rate=5.0, description='string3', date=date(2000, 3, 1), datetime=datetime(2000, 1, 3, 12, 0)),
    ])

    # Convert Rows to plain dicts without pandas
    data = df.rdd.map(lambda row: row.asDict()).collect()

    # Return data as JSON
    return jsonify(data)

I tried configuring Spark, Java, and Hadoop, but still no luck.
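For debugging, here is a minimal sketch (not part of the original deployment; get_spark is a hypothetical helper) that creates the SparkSession lazily inside the request and returns the exception text instead of a bare 500. The assumption is that SparkSession creation is what fails on the App Service worker, for example because a Java runtime is not available in the container; in that case the returned traceback typically says so, and the same message also appears in the App Service log stream.

import traceback

from flask import Flask, jsonify
from pyspark.sql import SparkSession

app = Flask(__name__)
_spark = None  # created on first request so the real startup error is not hidden

def get_spark():
    # Hypothetical helper: build the local SparkSession lazily and cache it
    global _spark
    if _spark is None:
        _spark = SparkSession.builder \
            .appName("PySparkApp") \
            .master("local[*]") \
            .config("spark.driver.bindAddress", "0.0.0.0") \
            .config("spark.ui.enabled", "false") \
            .getOrCreate()
    return _spark

@app.route('/process')
def process_data():
    try:
        spark = get_spark()
        df = spark.range(3)  # trivial job just to confirm Spark runs
        return jsonify([row.asDict() for row in df.collect()])
    except Exception:
        # Surface the real error instead of a bare 500; it also shows up
        # in the App Service log stream.
        app.logger.exception("Spark request failed")
        return jsonify({"error": traceback.format_exc()}), 500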