The added value of Ontotext Platform is its declarative approach to accessing and managing large-scale knowledge graphs (KGs). It allows engineering teams to define specific GraphQL interfaces for reading and writing data over parts of a knowledge graph, and lets the Platform implement an efficient translation of GraphQL to SPARQL.
Ontotext Platform 3.4 combines the power of GraphDB, Elasticsearch and GraphQL by enabling the definition, automatic synchronization and querying of indices to boost the performance of specific queries. The Workbench front-end tool of the Platform features a new generic search interface for KG exploration and navigation. The new version of the Semantic Object service delivers up to 10 times better performance when executing large, data-intensive GraphQL queries on top of GraphDB.
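To give a rough idea of what the declarative approach looks like from a developer's seat, here is a minimal sketch of posting a GraphQL query to a Semantic Objects endpoint, which the Platform then translates to SPARQL against GraphDB. The endpoint URL and the document/topic schema are hypothetical stand-ins for whatever Semantic Objects your team defines, not the Platform's documented API.

```python
# Minimal sketch: query a hypothetical "document" Semantic Object via GraphQL.
# The endpoint URL and schema below are assumptions for illustration only.
import requests

GRAPHQL_ENDPOINT = "http://localhost:9995/graphql"  # assumed local deployment URL

# A declarative GraphQL query; the Platform translates it to SPARQL,
# so the client never has to write SPARQL by hand.
query = """
query DocumentsAboutTopic($topic: String) {
  document(where: {topic: {label: {EQ: $topic}}}) {
    id
    title
    topic {
      label
    }
  }
}
"""

response = requests.post(
    GRAPHQL_ENDPOINT,
    json={"query": query, "variables": {"topic": "Knowledge Graphs"}},
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"])
```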
Semantic Search Service
Ontotext Platform now extends its capabilities with a major new component: a Semantic Search Service. It enables software engineers to easily deliver the knowledge graph capabilities most required by SMEs, such as Full-text Search (FTS), Auto-complete/typeahead (related concepts and controlled vocabulary), Auto-suggest (related keywords and phrases), Faceted search, and complex dashboards using different statistical and/or bucket aggregations.
As a result, software engineers can use a well-defined, rich GraphQL endpoint that provides a GraphQL representation of the Semantic Search objects and a large set of Elasticsearch features, following the Elasticsearch query syntax as closely as possible. The endpoint enables users not only to search the data but also to retrieve the data for the result list directly from Elasticsearch.
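As an illustration of what such a search call could look like, the sketch below sends a full-text query with a terms aggregation for faceting to the search-enabled GraphQL endpoint. The endpoint path, the search object and the field names are assumptions for the example, not the documented Semantic Search schema; they only mirror the Elasticsearch-style query shape described above.

```python
# Hedged sketch: full-text search plus a facet-style terms aggregation
# via a hypothetical search-enabled GraphQL endpoint.
import requests

SEARCH_ENDPOINT = "http://localhost:9995/graphql"  # assumed deployment URL

# Hypothetical query mirroring Elasticsearch query syntax:
# a "match" clause for FTS and a "terms" aggregation for facets.
search_query = """
query SearchDocuments {
  search(
    query: {match: {title: {query: "semantic technology"}}}
    aggs: {byTopic: {terms: {field: "topic"}}}
    limit: 10
  ) {
    hits {
      id
      title
    }
    aggregations
  }
}
"""

response = requests.post(SEARCH_ENDPOINT, json={"query": search_query}, timeout=30)
response.raise_for_status()
result = response.json()["data"]["search"]
print(result["hits"], result.get("aggregations"))
```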
All components of the new service are dockerized and available for Kubernetes, manageable with predefined Helm charts.
Workbench improvements
Ontotext Platform 3.4 introduces an auto-configurable search page that provides Full-text Search (FTS), Auto-complete (related concepts and controlled vocabulary) and Faceted search over the knowledge graph.
Performance and memory improvements
The new release significantly reduces the Ontotext Platform overhead in terms of query time and memory footprint. With some additional optimizations, the overall memory footprint of the Platform is now reduced by half.
To evaluate the new Semantic Search Service, follow our Quick Start guide, where you can find an example docker-compose file to run the service, as well as links to a more detailed guide on how to set up licenses and load example datasets and schemas.
If you want to try Ontotext Platform, request a license now!