Public Beta 1

We are thrilled to launch the public beta of the Timeplus cloud release.

We will update the beta version from time to time and list key enhancements on this page.

(Updates below are from 2022.)

Biweekly Update 12/26-1/6

  • When the local disk is 90% full, Timeplus will stop writing new data. This threshold is configurable.
  • In the Data Lineages page, sources are shown by default.
  • Applied a unified UI look & feel across many pages.
  • Released Timeplus Python SDK 1.1.1 with a friendlier API for creating, ingesting into, and querying streams.
  • (Experimental) You can now push data to Timeplus from your local Kafka cluster with kafka-connect-timeplus, from your local Pulsar cluster with pulsar-io-sink, or from an Airbyte cluster with the destination-timeplus connector. We also documented how Timeplus Cloud can pull data from your local data source via ngrok.

Biweekly Update 12/12-12/23

  • With the recent enhancements of the Ingest API, in many cases, you can configure other systems to push data directly to Timeplus via webhook, without writing code.

  • Now you can set descriptions while creating/updating streams or views.

  • You can edit a view to change its SQL query, without having to delete then recreate it.

  • For materialized views, you can see their row count and disk size. The retention policy can be updated too.

  • In the query page, you can filter results by keyword, without changing the SQL.

  • In the query page, we removed the pagination. The latest results are shown at the bottom.

  • For columns with long text or JSON values, the content could be truncated in the query results in previous versions. Now you can click on the row to show the full content in a side panel.

Biweekly Update 11/28-12/9

  • We added an experimental feature to create user-defined aggregation functions (UDAF) via JavaScript. You can implement highly customized logic with JavaScript, even for Complex Event Processing (CEP) scenarios. Contact us if you want to try this feature.
  • Refined the documentation for the Python SDK.
  • Source
    • In the source management page, we added a sparkline to show the throughput for each source. This sparkline auto-refreshes every 15 seconds.
    • When you create a new source and choose to send data to an existing stream, only the streams with matching schema will be shown. If no existing streams match, you have to create a new stream.
    • In the preview step, the first 3 rows are fetched from the source. If Timeplus cannot detect a column's data type automatically, the column type is set to unknown. This can happen if the values in those 3 events contain null; please check with your data source provider. If you are sure future events will be of a certain data type, such as string, you can change the column type and choose to create a new stream to receive data from the source.
  • When you create a new materialized view, you can set a retention policy, specifying the max size or max age for the data in the materialized view.
  • Clicking on a recent query on the homepage will now open the query page, instead of showing the query history.
  • We removed the purple page description banners formerly at the top of each page. If no objects are defined on a certain page, a customized help message is shown.
  • You can click-and-drag to resize column width in the streaming table (query page).
  • An experimental alert manager UI is added. Please check our user guide.

Biweekly Update 11/14-11/25

  • Source, sink, API and SDK
    • We built and open-sourced the Pulsar Sink Connector for Timeplus. You can install this connector in your Pulsar cluster and push real-time data to Timeplus.
    • We released a major upgrade of the Python SDK. The code and documentation are automatically generated by Swagger Codegen, so it will always be aligned with our latest REST API. Please note this is not compatible with the previous 0.2.x or 0.3.x SDKs; if you are using those SDKs, please plan your migration. New APIs are only available in the 1.x SDK.
    • We further enhanced our Ingest REST API to support more systems, such as Vector and Auth0. If you would like to use such a third-party system/tool to push data to Timeplus but it doesn't allow a custom content type, you can use the standard application/json content type and send a POST request to /api/v1beta1/streams/$STREAM_NAME/ingest?format=streaming. This ensures the Timeplus API server treats the POST data as NDJSON. For API authentication, besides the custom HTTP header X-Api-Key: THE_KEY, we now also support Authorization: ApiKey THE_KEY. Learn more in the Ingest API documentation.
  • UI improvements
    • In the signup/login page, we added the WeChat integration. You can scan the QR code with your phone and sign up or log in.
    • When a query is finished, canceled, or paused, you can download the current results as a CSV. This is helpful when there are multiple pages of results.
    • When you click an entity on the Data Lineage page, such as a stream or a view, a summary is now shown in the side panel instead of a pop-up, allowing you to see more detailed information.
    • We added an experimental UI for the alert manager. Want to be the first to try this feature? Get in touch with us!
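As a sketch of how the Ingest API endpoint and auth headers above can be used from code (the workspace URL, stream name, and API key below are placeholders; verify the details against the Ingest API documentation):

```python
import json
import urllib.request

def to_ndjson(events):
    """Serialize a list of dicts as newline-delimited JSON (NDJSON)."""
    return "\n".join(json.dumps(e) for e in events)

def build_request(host, stream, api_key, events):
    """Build a POST request for the Timeplus ingest endpoint.

    format=streaming tells the API server to treat the body as NDJSON.
    The host and key here are placeholders -- adjust for your workspace.
    """
    url = f"{host}/api/v1beta1/streams/{stream}/ingest?format=streaming"
    return urllib.request.Request(
        url,
        data=to_ndjson(events).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The custom header also works: "X-Api-Key": api_key
            "Authorization": f"ApiKey {api_key}",
        },
        method="POST",
    )

req = build_request(
    "https://us.timeplus.cloud/myworkspace",  # hypothetical workspace URL
    "car_live_data",
    "THE_KEY",
    [{"cid": "c001", "speed_kmh": 52.5}, {"cid": "c002", "speed_kmh": 48.1}],
)
# urllib.request.urlopen(req) would send the events (requires a live workspace)
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) appends the two events to the target stream.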

Biweekly Update 10/31-11/11

  • Streaming engine

    • A new LIMIT <n> BY <column> syntax is introduced. Combined with the emit_version() function, you can show a limited number of results per emit, e.g.

      SELECT cid, avg(speed_kmh) AS avgspeed, emit_version()
      FROM tumble(car_live_data, 5s) GROUP BY window_start, cid
      LIMIT 3 BY emit_version()
    • (Experimental) You can now set a retention policy for materialized views, to limit the number of rows or total storage for each materialized view. UI support will be available soon.

  • Source, sink, API and SDK

    • Enhanced the Ingest REST API to support newline-delimited JSON (NDJSON).
    • Refined the REST API docs to show APIs in different versions.
    • A new version of the datapm CLI ships with an enhanced Timeplus sink. Set the workspace baseURL to push data to Timeplus; both cloud and on-prem Timeplus are supported.
  • UI improvements

    • We added a guiding system for new users to quickly get started with Timeplus.
    • Data lineage page is enhanced with visual refresh.
    • (Experimental) While a streaming SQL query is running, the column headers show the values for the 10 most recent rows. When the query is paused or canceled, the column headers show an infographic for all cached results, with a line for the average value.
    • (Experimental) localized user interface for China market.

Biweekly Update 10/17-10/28

  • Streaming engine

    • We simplified the session time window: if you want to create sub-streams, you no longer need to set the keyBy column as a parameter of the session window. Just use SELECT .. FROM session(..) PARTITION BY keyBy. The other time window functions (tumble and hop) support PARTITION BY in the same way.

    • The other enhancement of the session time window: we introduced an intuitive way to express whether the events matching startCondition or endCondition should be included in the session window. Four combinations are supported: [startCondition, endCondition], (startCondition, endCondition), [startCondition, endCondition), and (startCondition, endCondition].

    • We added support for <agg> FILTER(WHERE ..) as a shortcut to run an aggregation on data meeting a certain condition, e.g.

      select count() filter(where action='add') as cnt_action_add,
      count() filter(where action='cancel') as cnt_action_cancel
      from table(bookings)
    • Significantly reduced memory consumption.
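Putting the two session-window enhancements above together, a hypothetical query might look like the following (the 5m timeout, column choices, and exact session() parameter order are illustrative only; check the Timeplus docs for the precise signature):

```sql
-- One session per car (PARTITION BY cid). A session opens when the car
-- starts moving and closes when it stops; '[' includes the start event,
-- ')' excludes the stop event from the window.
SELECT window_start, window_end, cid, avg(speed_kmh) AS avg_speed
FROM session(car_live_data, 5m, [speed_kmh > 0, speed_kmh = 0))
PARTITION BY cid
GROUP BY window_start, window_end, cid
```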

  • Source, sink, API and SDK

    • For Kafka sources, if the authentication method is set to "None", the "Disable TLS" option will be turned on automatically.
    • Enhanced the go-client open-source repo to support the low-level ingestion API.
    • An experimental JDBC driver is open-sourced. You can use this driver in clients such as DataGrip to run read-only queries (both streaming and historical queries are supported).
  • UI improvements

    • Introduced the brand-new "Query Side Panel". You can expand it to explore many features, such as query snippets, SQL functions, bookmarks and history.
    • The bar chart is back. Add GROUP BY to the query, then choose “Viewing latest data” and select the column for “Group by”.
    • More information is shown on the "Data Lineage" page when you hover over the entities. For example, you can see the data schema for streams and the query behind views.
    • Greatly improved the user experience of query tabs and bookmarks. You can easily set meaningful names for each query tab. When the query editor is not empty, click the bookmark icon to save this SQL for future use. Rename or delete the bookmarks in the query side panel.
    • Column names and types are shown for views in the "Stream Catalog".

Biweekly Update 10/3-10/14

  • Streaming engine
    • Enhanced sub-streams to support stream-level PARTITION BY, e.g. SELECT cid, speed_kmh, lag(longitude) AS last_long, lag(latitude) AS last_lat FROM car_live_data PARTITION BY cid. Previously you had to add partition by cid for each aggregation function.
  • UI improvements
    • Single value visualization is enhanced, allowing you to turn on a sparkline to show the data change.
    • In sources and sinks pages, the throughput for each item is now shown in the list.
    • When you click the ? icon, we will show you the relevant help message for the current page, as well as the version information.
    • For new users, we also show a short description of what the page is about, as a closable information box.