import pandas as pd

# Load data
def load_data(file_path: str) -> pd.DataFrame:
    """
    Loads the dataset from a CSV file.
    
    Args:
    - file_path (str): Path to the dataset file.

    Returns:
    - pd.DataFrame: Loaded dataset.
    """
    return pd.read_csv(file_path)

# Clean data (e.g., handle missing values, remove duplicates)
def clean_data(df: pd.DataFrame) -> pd.DataFrame:
    """
    Cleans the dataset by removing duplicates and handling missing values.

    Args:
    - df (pd.DataFrame): The raw dataset.

    Returns:
    - pd.DataFrame: Cleaned dataset.
    """
    df = df.drop_duplicates()
    # Fill missing numeric values with the column mean; numeric_only keeps
    # non-numeric columns untouched instead of raising on newer pandas versions.
    df = df.fillna(df.mean(numeric_only=True))
    return df

# Normalize data (e.g., standard scaling)
def normalize_data(df: pd.DataFrame) -> pd.DataFrame:
    """
    Normalizes the dataset using standard scaling (z-score).

    Args:
    - df (pd.DataFrame): The cleaned dataset.

    Returns:
    - pd.DataFrame: Normalized dataset.
    """
    # Z-score only the numeric columns; non-numeric columns pass through unchanged.
    df = df.copy()
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = (df[numeric_cols] - df[numeric_cols].mean()) / df[numeric_cols].std()
    return df

# Main function for preprocessing
def preprocess_data(file_path: str) -> pd.DataFrame:
    """
    Preprocesses the dataset from file by loading, cleaning, and normalizing it.

    Args:
    - file_path (str): Path to the dataset file.

    Returns:
    - pd.DataFrame: The preprocessed dataset.
    """
    df = load_data(file_path)
    df = clean_data(df)
    df = normalize_data(df)
    return df
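

# Example usage (a minimal sketch): "data.csv" is a hypothetical path assumed
# here for illustration, not a file shipped with this module. Replace it with
# the CSV you want to preprocess.
if __name__ == "__main__":
    processed = preprocess_data("data.csv")
    print(processed.head())       # quick look at the first rows
    print(processed.describe())   # means should be ~0 and stds ~1 for numeric columns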