
First Steps: QuickStart Guide

Written by Karim JOUINI
Updated over a month ago

ThunderCode – Complete Onboarding Guide

A step-by-step walkthrough from creating your first project to generating test cases with AI.

🧭 Part 1: Creating a New Project

🔹 Step 1.1 – Access the Test Projects Dashboard

  • After logging in, you land on the Test Projects page.

  • Each card shows:

    • Project name

    • Screenshot (optional)

    • Website URL

    • Test Case, Test Suite, and Test Run counts

  • To create a new project, click “New Project”.


🔹 Step 1.2 – Choose the Project Type

  • You’ll see 3 options:

    • 🌐 Web Project (Available) – Enter a URL to test a live site.

    • 📱 Native Mobile Project (Coming Soon)

    • ⚙️ API Project (Coming Soon)

  • Click on Web Project to proceed.


🔹 Step 1.3 – Enter the Website URL

  • On the Web Project Setup page, type your website’s full URL.

    Example: https://thundercode.ai

  • Click Next.

🧠 What happens next:

  • ThunderCode automatically analyzes the website.

  • It scrapes the structure and identifies key UI elements to assist in test generation.
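
ThunderCode runs this analysis for you automatically, so no action is needed on your part. Purely as an illustration of the kind of work involved, the sketch below fetches a page and lists the interactive elements a test might target (a hypothetical example using the requests and beautifulsoup4 libraries, which are assumptions of this sketch, not part of ThunderCode's actual pipeline):

```python
# Illustrative sketch only; ThunderCode's real analysis is internal.
# Assumes: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def list_key_ui_elements(url: str) -> list[str]:
    """Fetch a page and collect the elements a UI test might interact with."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    elements = []
    # Buttons, links, and form fields are the usual targets of UI test steps.
    for tag in soup.find_all(["button", "a", "input", "select", "textarea"]):
        label = tag.get_text(strip=True) or tag.get("name") or tag.get("id") or ""
        elements.append(f"<{tag.name}> {label}".strip())
    return elements

for element in list_key_ui_elements("https://thundercode.ai"):
    print(element)
```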


📋 Part 2: Project Configuration

🔹 Step 2.1 – Project Overview

Fill in the following details:

  • Project Name – e.g. “Login Tests”

  • Target Market – B2C, B2B SMB, B2B Enterprise, or Gov

  • Target Users – Add roles like “guest,” “admin,” or “premium user”

  • Screenshot (optional) – Upload a PNG/JPG (max 3MB)

Click Next.


🔹 Step 2.2 – Scope & Specifications

Define the project’s purpose:

  • Testing Objectives – e.g., "Verify all login flows"

  • Key Features – e.g., "Cart, filters, login, payment"

  • Integrations – e.g., "Stripe, Firebase, Algolia"

  • Testing Methodology – Tick one or more:

    • Agile

    • Manual

    • CI

    • TDD / BDD

Click Next.


🔹 Step 2.3 – Define Testing Standards

Choose standards relevant to your product:

  • Quality Standards – e.g., ISO 25010

  • Security Standards – e.g., GDPR, ISO 27001

  • Accessibility Standards – e.g., WCAG 2.1

  • AI Standards – e.g., fairness, explainability

  • Other Standards – Add your own if needed

Click Next.


🔹 Step 2.4 – Tooling & Tech Stack

Give ThunderCode context about your stack:

  • Framework – React, Angular, Vue, Django…

  • Hosting Platform – Vercel, Heroku, AWS…

  • Version Control – GitHub, GitLab, Bitbucket…

Click Submit.


🧪 Part 3: Creating Your First Test Case

🔹 Step 3.1 – Test Project Page

After submitting, you land on your Project Dashboard:

  • Tabs for:

    • Test Cases

    • Test Runs

    • Environments

  • Click “Add a Test Case.”


🔹 Step 3.2 – Describe the Test Case

  • You’re taken to a page titled “Describe your test case and click on Generate Test Steps.”

  • Input a simple sentence like:

    “Check that the homepage is in French.”

  • Click “Generate Test Steps.”


🤖 Part 4: Test Steps Creation

🔹 Step 4.1 – Review the Generated Steps

ThunderCode generates a list of test steps based on your description. Each step is:

  • Written in clear, actionable language

  • Automatically numbered and structured in execution order
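
For the example from Step 3.2 (“Check that the homepage is in French”), the generated steps might look something like this (illustrative only; the actual output varies):

  1. Go to https://thundercode.ai

  2. Wait for the homepage to finish loading

  3. Verify that the visible page content is displayed in French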

You can:

  • Edit steps

  • Reorder them

  • Delete unwanted steps


🔹 Step 4.2 – Use the AI Chat

  • On the right side, you’ll see a chat panel.

  • You can:

    • Ask the AI to clarify steps

    • Request modifications (e.g., add, remove, reorder steps)

    • Add extra details

For example, you might type: “Add a step that checks the footer links are also in French.”

🧪 Part 5: Run and Analyze Results

🔹 Step 5.1 – Run the Test

Click Execute to start the test.

ThunderCode opens a live browser on the right and begins running each step in real time.

  • Each step is highlighted as it runs.

  • You can see exactly what the AI agent is doing: visiting pages, clicking buttons, or filling inputs.
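
How the agent drives the browser is ThunderCode's own implementation, but conceptually each test step maps onto an ordinary browser-automation action. Here is a rough, hypothetical sketch (Playwright is used purely as an illustration; it is an assumption of this sketch, not ThunderCode's documented stack):

```python
# Hypothetical illustration of how test steps map to browser actions.
# Playwright is an assumption of this sketch, not ThunderCode's stack.
# Assumes: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # headless=False keeps the browser visible, like ThunderCode's live view.
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()

    # Step 1: Go to https://thundercode.ai
    page.goto("https://thundercode.ai")

    # Step 2: Read the visible text so a later assertion can inspect it.
    body_text = page.inner_text("body")
    print(body_text[:200])

    browser.close()
```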


🔹 Step 5.2 – Assertions and Checks

At the end of the test (or during it), ThunderCode performs assertions to check if the expected elements or content are present.

Example:

“Assert that the homepage content is displayed in French.”

Each assertion result is classified by severity:

  • 🔴 Critical – Blocking issue.

    Major functionality is broken, and the test cannot continue.

    → The test stops and is marked as failed.

  • 🟠 High – Important element is missing.

    The test completes, but the outcome is unreliable.

    → The test is marked as failed.

  • 🟡 Medium – Minor issue (e.g., UI inconsistency).

    → The test continues and is marked as passed with warning.

  • 🔵 Low – Informational or cosmetic (e.g., small layout shift).

    → The test passes but logs the issue as a notice.
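
To make the mapping concrete, here is a minimal Python sketch of the severity-to-outcome rules listed above (an illustration of the documented behavior, not ThunderCode's actual code):

```python
# Minimal sketch of the documented severity-to-outcome mapping.
# Illustrative only; not ThunderCode's actual implementation.
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"  # blocking issue
    HIGH = "high"          # important element missing
    MEDIUM = "medium"      # minor issue (e.g., UI inconsistency)
    LOW = "low"            # informational or cosmetic

def outcome(severity: Severity) -> tuple[str, bool]:
    """Return (result, keep_running) for an assertion of the given severity."""
    if severity is Severity.CRITICAL:
        return ("failed", False)               # test stops immediately
    if severity is Severity.HIGH:
        return ("failed", True)                # test completes but fails
    if severity is Severity.MEDIUM:
        return ("passed with warning", True)
    return ("passed, notice logged", True)     # LOW

print(outcome(Severity.MEDIUM))  # ('passed with warning', True)
```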

For any failure, ThunderCode provides:

  • A screenshot at the exact moment of the issue

  • An AI-generated explanation of what went wrong


😉 Part 6: Bonus Tips

  • ✍️ Write clear, specific prompts

    Use short, direct sentences that describe exactly what you want to test.

  • 🔖 Use exact button and page names

    Refer to elements as they appear in the UI.

    ✅ “Click ‘Change Password’”

    ❌ “Reset your credentials”

  • 🚫 Avoid vague terms

    Don’t use words like “navigate” or “interact.”

    Prefer:

    “Click,” “Go to,” “Select,” “Type”

  • ✂️ Keep instructions short

    Break long prompts into smaller chunks.

    This helps the AI generate better and more accurate test steps.

  • 📦 Keep test cases focused

    One test = one goal.

    Avoid mixing multiple flows or features in a single test.
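
For example, rather than one prompt like “Log in, add an item to the cart, and check out,” create three separate test cases:

  • “Log in with a valid email and password.”

  • “Add a product to the cart from the product page.”

  • “Complete checkout as a logged-in user.”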
