FAQ

Frequently asked questions from the community

Who is the "Strech" platform for?

Developers, Data Engineers, Data Scientists, and Data Analysts - We want to make your data-related tasks much more efficient, joyful, and reliable.

Developers can replace hardcoded, tightly coupled data pipelines with "Strech".

Data Engineers can gain much more visibility into their ingested data, stop working for Kafka, and change the upstream with ease.

Data Analysts can control the entire flow and shape of the data, from the source all the way to Tableau, without a single line of code.

Use-case examples

Event-driven Applications

"For each support ticket that created - Send the data towards the "Strech" pipeline which configured to change the schema, calculate geolocation, run AI-based sentiment analysis, label the ticket and send to the organization MongoDB"

Real-time data enrichment

Strong data enrichment processes are a key part of building the golden customer record. One dataset by itself, no matter how detailed, doesn’t include every piece of behavioral or transactional data needed to build a comprehensive single view of the customer.

Examples: machine learning, consumer targeting, marketing.

Push data

Your "Strech" pipeline can act as the target for API endpoints, freeing you from the heavy lifting of building a suitable backend. It opens up a world of possibilities.

Examples: RapidAPI users can transform and analyze API payloads using a "Strech" pipeline, a target for WIX webhooks, and much more.

What are the current integrations?

As we are in our Alpha version and our focus is on other features, we temporarily support only MongoDB and Custom API for the source/destination section. As for data analysis, we provide an IDE for uploading Python functions, as well as AI-based sentiment analysis and AI-based urgency analysis.
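For a sense of what an uploaded function could look like, here is a minimal sketch. The handler name and the per-record signature are assumptions for illustration; the exact interface the IDE expects may differ:

```python
# Minimal sketch of a Python function one might upload through the IDE.
# The name "handler" and the per-record signature are assumptions here,
# not a documented "Strech" interface.

def handler(record: dict) -> dict | None:
    """Keep only records with a body and attach a normalized text field."""
    body = record.get("body", "").strip()
    if not body:
        return None  # returning None drops the record from the pipeline
    record["text_normalized"] = body.lower()
    return record
```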

In our next version, which will be available soon, many more integrations will be available, including Redis, AWS S3, and Postgres, along with more analysis functions.

What are the types of data collection (Batch/Aggregation or Stream)?

"Strech" offers both batch and streaming pipelines under the same platform. Under batch pipelines, It's important to know that "Strech" decided to provide 99.99995% data integrity which means that no data will be sent to the destination unless the entire batch has been processed. On the other hand, when choosing a streaming pipeline, every chunk of data ingested into the platform will be handled individually.

Can I push data to my pipeline at "Strech"?

One of our biggest strengths :) When creating a push-stream type of pipeline, you will receive a URL at the end of the pipeline's configuration.

For example: https://wix.app.strech.io/pipeline/17gsd-4234nkd-2342039f-sdfs234

You may configure this URL as your API endpoint for webhooks, logs, stream destinations, events, and more.
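For instance, a producer could POST its payload directly to that URL. The sketch below uses the example URL from above; the payload and any required headers or authentication are assumptions:

```python
# Minimal sketch: pushing an event to the pipeline URL from the example
# above. The payload shape and any auth requirements are assumptions.
import requests

event = {"type": "order.created", "order_id": 981, "total": 49.90}

resp = requests.post(
    "https://wix.app.strech.io/pipeline/17gsd-4234nkd-2342039f-sdfs234",
    json=event,
    timeout=10,
)
resp.raise_for_status()  # non-2xx responses raise an error
```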

Types of pipeline triggers

At the moment we support manual triggers and cron-based scheduled triggers; a cron expression such as 0 * * * *, for example, would run a pipeline every hour. Our next versions will allow triggering a pipeline through an API call.

What is the pricing model?

Simple usage also means a simple pricing model. Each user starts with a free balance of 500 objects, meaning 500 objects that can be transferred through "Strech". Each ingested data object, no matter its size, lowers the balance by one object. An object can be a JSON item within a JSON file, a NoSQL document, a SQL row, a single image, or a row within a CSV file. When a larger balance is required, the user can purchase an object package, which increases the balance by the purchased amount.

Example: My balance holds 10,000 objects. I run a pipeline that ingests 5,000 NoSQL documents and performs an analysis function that drops 50% of the documents, then ships the remaining data to the destination. My updated balance would be 5,000 objects, not 7,500, because the balance is charged per ingested object rather than per delivered object.
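The same arithmetic, as a small sketch:

```python
# Sketch of the pricing arithmetic: the balance is charged per ingested
# object, regardless of how many objects survive the pipeline.

balance = 10_000
ingested = 5_000           # NoSQL documents pulled into the pipeline
delivered = ingested // 2  # the analysis step drops 50% of them

balance -= ingested        # charged on ingestion, not on delivery
assert balance == 5_000 and delivered == 2_500
```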