Jelou - Notice history

All systems operational

Dec 2025

Error Messages in Executions
  • Postmortem

    RCA – 12/17/25

    1. Incident Summary

    On December 17, 2025, alerts were triggered related to the performance and availability of services dependent on the MongoDB database. Jelou’s technical team prioritized the platform review and confirmed that certain query processes were experiencing performance degradation, impacting the proper operation of associated services.

    2. Impact

    During the incident, customers interacting with chats and integrated corporate services experienced slow responses and intermittent behavior, including increased wait times on some queries, partially affecting the overall user experience.

    3. Detection

    The incident was detected through performance monitoring systems and automated alerts, which indicated an abnormal increase in database response times. Additionally, support tickets were received from the customer support team reporting slowness and service inconsistencies.

    4. Response

    Jelou’s technical team conducted a thorough analysis of the database status, reviewing performance metrics, query load, and resource utilization. During this process, inefficiencies in query execution were identified, causing overload and negatively impacting system responsiveness.

    5. Root Cause

    The investigation determined that the incident stemmed from an issue in the database's query optimization, which caused certain operations to execute inefficiently, increasing processing times and resource consumption.

    6. Resolution

    To resolve the incident, corrective adjustments were applied to the database configuration and optimization, restoring the necessary conditions for proper query processing. These actions normalized performance and restored stability to the affected services.
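    As an illustration of the kind of corrective work described above, the sketch below shows how an inefficient MongoDB query can be diagnosed and addressed from the shell. The connection URI, collection, and field names are hypothetical placeholders, not details from this incident.

    ```shell
    # Hypothetical sketch: inspect the execution plan of a slow query and add a
    # supporting index. All names below are illustrative placeholders.
    mongosh "$MONGO_URI" --eval '
      // A winning plan stage of COLLSCAN over many documents indicates a missing index.
      db.executions.find({ status: "error" }).explain("executionStats");
      // A compound index lets this query be served without a full collection scan.
      db.executions.createIndex({ status: 1, createdAt: -1 });
    '
    ```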

    7. Mitigation

    As a preventive measure, change management and validation controls for the database were reinforced, along with continuous monitoring of critical performance metrics. In addition, early warning alert mechanisms remain active to proactively detect any anomalous behavior that could impact Jelou’s services.

  • Resolved
    This incident has been resolved.
  • Investigating

    Some error messages are currently appearing in certain chat executions.
    Our technical team is reviewing this behavior and making the necessary adjustments to ensure proper service operation.

Nov 2025

Global Cloudflare Connectivity Outage
  • Resolved
    This incident has been resolved.
  • Monitoring

    Cloudflare implemented a fix and is currently monitoring the result.

  • Identified

    Issue: Cloudflare is experiencing a global connectivity outage (7:10 am UTC-05).
    Impact: You may have trouble accessing apps.jelou.ai.
    Jelou Status: All Jelou services remain operational, but external access is affected.
    Temporary Solution: Please use our contingency URL to log in:
    https://apps.01lab.co/login
    Updates: Follow live updates here: https://status.jelou.ai
    Source: Cloudflare incident details: https://www.cloudflarestatus.com

Oct 2025

Issue with Page Loading
  • Postmortem

    RCA

    Incident Summary

    On October 30, 2025, between 15:50 and 18:39, a visual incident occurred on the platform during the deployment of the service hosted on Cloudflare.
    During that period, some static frontend assets were cached with an inconsistent version, causing visual errors in the application interface.

    The backend, APIs, and chatbots continued to operate normally, so the overall functionality and availability of the system were not affected.


    Impact

    The incident only affected the visual presentation of the platform, with no impact on service operations or user communication.
    Some users may have experienced inconsistencies in styles or interface appearance.
    The technical team immediately applied corrective actions by performing a cache purge and redeployment, restoring the normal visual display.


    Detection

    The issue was detected internally through technical team monitoring and reports of visual anomalies in the system.
    It was confirmed that the cached resource versions did not match the latest release, which caused the visual inconsistencies.


    Response

    Once the root cause was identified, the technical team performed a manual cache purge in Cloudflare and redeployed the frontend.
    This action regenerated the visual assets and restored the correct system appearance without impacting functionality.
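    The manual purge described above can be sketched against Cloudflare's public v4 API. The zone ID and API token below are placeholders, and purging everything is only one of the purge options the API accepts; this is not the exact command used during the incident.

    ```shell
    # Hypothetical sketch: purge the full cache for a zone via Cloudflare's v4 API.
    # ZONE_ID and API_TOKEN are placeholders to be supplied by the operator.
    curl -X POST \
      "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      --data '{"purge_everything": true}'
    ```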


    Root Cause

    The incident originated from an inconsistent version of static frontend assets generated during deployment, which remained temporarily cached and served to some users.
    This affected only the visual layer of the system and did not impact operations.


    Resolution and Preventive Actions

    • A cache purge was performed and the frontend was redeployed, restoring the correct visual styles.

    • An automatic integrity verification process for assets will be implemented after each deployment.

    • Frontend monitoring will be strengthened to detect cache-related visual errors earlier.

  • Resolved

    This incident has been resolved.

    We will post the RCA in a later update.

  • Monitoring
    We implemented a fix and are currently monitoring the result.
  • Investigating

    We’re currently investigating this issue. Please avoid refreshing the page while we work on a fix.
