How to Collect Data Through Website Registration?

This document explains the complete workflow for regular users, from registering an account to completing data collection and exporting the results. It is intended for new users of the platform.


The complete user operation path is as follows:

  1. Register Account
  2. Log In
  3. Create Task
  4. Configure Target
  5. Scrape Data
  6. Preview Results
  7. Export Data

Visit the official website and click the [Go to Console] button in the top right corner to enter the registration page.


On the registration page, users need to fill in the following information:

  • Email address
  • Login password
  • Confirm password
  • Invitation code (optional)
  • CAPTCHA

After filling in the information, click [Submit Registration].

Registration is complete once you verify your email address.


On the login page, users enter:

  • Username
  • Password

Click the [Login] button to enter the system.

The platform also supports signing in with Google or GitHub, without a separate registration.


3.2 Enter the Console After Successful Login


After successful login, users enter the main system console, where they can view task overviews, usage instructions, and other information.

In the left menu, click [Template], then select the script (template) you want to run.


When creating a new task, users need to fill in the following:

  • Task Name (used to distinguish different tasks)

On the scraping parameter configuration page, users can configure:

  • Target page URL / URL patterns
  • Data fields to collect (e.g., title, content, timestamp, etc.)
  • Whether to enable pagination
  • Collection depth / quantity limits

Please configure these options according to the requirements of the selected data collection script.
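As a rough illustration, the parameters above can be thought of as one configuration object per task. The field names below are hypothetical; the actual keys are defined by the input schema of the selected template:

```python
# Hypothetical task configuration; real field names depend on the
# input schema of the selected template.
task_config = {
    "taskName": "news-articles-daily",           # used to distinguish tasks
    "startUrls": ["https://example.com/news"],   # target page URL(s)
    "fields": ["title", "content", "timestamp"], # data fields to collect
    "enablePagination": True,                    # follow "next page" links
    "maxDepth": 2,                               # collection depth limit
    "maxItems": 500,                             # quantity limit
}

# A template would typically reject a config missing its required keys.
required = {"taskName", "startUrls", "fields"}
missing = required - task_config.keys()
assert not missing, f"missing required fields: {missing}"
```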


Advanced users can configure as needed:

  • Concurrency Limits
  • Timeout Strategy
  • Exception Handling

Regular users can keep the default configuration.
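To illustrate why the defaults are safe to keep, here is a sketch of how advanced options might layer user overrides on top of platform defaults. The option names and values are made up for the example:

```python
# Hypothetical advanced options with safe defaults; names and values
# are illustrative, not the platform's actual settings.
DEFAULTS = {
    "maxConcurrency": 5,       # concurrency limit
    "requestTimeoutSecs": 30,  # timeout strategy
    "maxRetries": 3,           # exception handling: retry failed requests
}

def build_options(overrides=None):
    """Return the default options with any user overrides applied."""
    options = dict(DEFAULTS)
    options.update(overrides or {})
    return options

# Regular users: keep the defaults.
print(build_options())
# Advanced users: raise concurrency, shorten the timeout.
print(build_options({"maxConcurrency": 20, "requestTimeoutSecs": 10}))
```

Options not overridden (here, `maxRetries`) keep their default values.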


After configuring parameters, click the [Start] button.

The system will validate the task, and upon passing validation, the task will enter the running state.
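The validation step can be imagined as a few basic checks before the task is allowed to run. This sketch, using the same assumed field names as above, only verifies that a name is present and that the target URLs are well-formed; the platform's real validation is richer (schema, quotas, permissions):

```python
from urllib.parse import urlparse

def validate_task(config):
    """Minimal pre-run validation sketch: non-empty name, http(s) URLs."""
    errors = []
    if not config.get("taskName"):
        errors.append("task name is required")
    urls = config.get("startUrls") or []
    if not urls:
        errors.append("at least one target URL is required")
    for url in urls:
        if urlparse(url).scheme not in ("http", "https"):
            errors.append(f"unsupported URL scheme: {url!r}")
    return errors

print(validate_task({"taskName": "demo", "startUrls": ["https://example.com"]}))  # []
```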

Users can view task status in the [Run] list:

  • Pending
  • Running
  • Completed
  • Failed
  • Canceled
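Completed, Failed, and Canceled are terminal states; Pending and Running are not. If you script against the platform, the status can be polled until a terminal state is reached. The loop below is a sketch against a hypothetical `get_status` function and a simulated status sequence; real polling would query the platform with a delay between checks:

```python
import time

# Terminal states from the [Run] list; Pending and Running are transient.
TERMINAL = {"Completed", "Failed", "Canceled"}

def wait_for_task(get_status, poll_secs=0.0, max_polls=100):
    """Poll get_status() until a terminal state or the poll budget runs out."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(poll_secs)
    raise TimeoutError("task did not reach a terminal state")

# Simulated status sequence for demonstration.
statuses = iter(["Pending", "Running", "Running", "Completed"])
print(wait_for_task(lambda: next(statuses)))  # Completed
```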


After the task is completed, click [Run] → the Run ID on the right side of the task to open the data results page.

On the results page, users can:

  • View the list of scraped data
  • View details of a single data entry
  • View run logs
  • View input parameter details
  • View run time, costs, and other information


In the top right corner of the results table, click the [Export] button.

The system supports the following export formats (depending on permissions):

  • CSV
  • JSON

After selecting the export format, click [Confirm Export].

The system will generate the export file and automatically download it upon completion.
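Once the file has downloaded, both formats can be read with standard tools. The sample data below is made up to show the shape of each format:

```python
import csv
import io
import json

# Made-up samples of what an exported file's contents might look like.
csv_text = "title,timestamp\nHello,2024-01-01\nWorld,2024-01-02\n"
json_text = '[{"title": "Hello", "timestamp": "2024-01-01"}]'

# CSV export: each row becomes a dict keyed by the header row.
csv_rows = list(csv.DictReader(io.StringIO(csv_text)))
print(csv_rows[0]["title"])       # Hello

# JSON export: a list of objects, one per scraped entry.
json_rows = json.loads(json_text)
print(json_rows[0]["timestamp"])  # 2024-01-01
```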


What to do if the task keeps running?

Check the task's run logs for errors or warnings; if the task still does not finish, contact the administrator.

What to do if the system shows an error code?

- Record the complete **error code and prompt message**.

- First try **refreshing the page or logging in again**.

- If the error persists, provide the error message to **technical support or the administrator** for quick problem resolution.


What to do if clicking operation buttons has no response?

- Try refreshing the page.

- Check if your browser version is too old; it is recommended to use the latest version of mainstream browsers like Chrome or Firefox.

- Disable browser plugins (especially ad blockers) and try again.