User:AndreaWest/WDQS Testing/Running TFT

In order to execute the Tests for Triplestore (TFT) [https://github.com/BorderCloud/TFT codebase] on a local installation of a database (and without Docker and JMeter), changes were made to the code and test definitions. This page explains those changes and provides references to all of the backing code. Also included are the steps to execute the tests, using a Stardog DB as the example, and details on how to extend them.


== Testing Overview ==
The TFT infrastructure was forked from the "master" branch (not the default "withJMeter" branch) of the [https://github.com/BorderCloud/TFT BorderCloud TFT repository]. The tests were also forked from BorderCloud, from its [https://github.com/BorderCloud/rdf-tests rdf-tests repository]. These tests are the ones defined by the W3C and were originally forked from the [https://github.com/w3c/rdf-tests W3C RDF Test repository]. The new repositories are:
* [https://github.com/AndreaWesterinen/TFT TFT codebase]
* [https://github.com/AndreaWesterinen/rdf-tests RDF tests]
* [https://github.com/AndreaWesterinen/GeoSPARQLBenchmark-Tests GeoSPARQL tests]


Minor changes were made to the RDF test definitions. Specifically, the manifest*.ttl files in the sub-directories of rdf-tests/sparql11/data-sparql11 were updated. Those files reference SPARQL query, TTL/RDF and other text files (used as inputs and outputs to validate test results) using an IRI declaration (left and right carets) that only specifies a file name with no explicit namespace (although a default namespace is defined in the Turtle file).


Since each such IRI is simply a file name (with no scheme such as http:, file:, etc.), some data stores may handle the references unpredictably. For this reason, the triples in the test definitions were updated to change the format from (for example) "<some_test_iri> qt:query <query_for_test.rq>" to "<some_test_iri> qt:query :query_for_test.rq", explicitly using the default namespace specified in the Turtle.
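For example, with the manifest's default namespace declared as shown below (the prefix and file names are illustrative; each manifest file declares its own default namespace), a query reference changes as follows:
<nowiki># Default namespace declared at the top of the manifest file (illustrative)
@prefix : <http://www.w3.org/2009/sparql/docs/tests/data-sparql11/update-silent/manifest#> .

# Before: the query file is referenced with a scheme-less, relative IRI
<some_test_iri> qt:query <query_for_test.rq> .

# After: the reference explicitly uses the default namespace
<some_test_iri> qt:query :query_for_test.rq .</nowiki>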


The code behind these changes can be found in the [https://github.com/AndreaWesterinen/rdf-tests/blob/master/FixTTL.ipynb FixTTL Jupyter notebook] in [https://github.com/AndreaWesterinen/rdf-tests the updated RDF tests repository].


As regards the GeoSPARQL tests, the BorderCloud tests were not used since they were incomplete. Instead, the tests from the [https://github.com/OpenLinkSoftware/GeoSPARQLBenchmark GeoSPARQL Benchmark repository] were utilized. That repository was forked to create [https://github.com/AndreaWesterinen/GeoSPARQLBenchmark-Tests the repository noted above]. The test data and a subset of the test definitions are included, defined using the TFT format. The specific GeoSPARQL tests that are included are listed in the repository's README.md, which is displayed on its GitHub page.


Moving the tests from the original repository's test infrastructure to TFT required:
* Defining a manifest-all.ttl to indicate the test inputs and outputs (a sketch of an entry follows this list)
* Creating a directory structure (aligned with manifest-all) holding the test queries (.rq files) and possible test results (.srx files)
* Renaming any ''alternative'' result files that did not include the text '-alternative-' so that they do
** ''Alternative'' result files are explained in section 3.4.3 of the paper, [https://www.mdpi.com/2220-9964/10/7/487 A GeoSPARQL Compliance Benchmark]
*** For example, testing of GeoSPARQL Requirement 9 was defined to use the files:
**** query-r09-4.rq, the query
**** query-r09-4.srx, the first alternative result file (which was renamed to query-r09-4-alternative-2.srx)
**** query-r09-4-alternative-1.srx, the second alternative result file
*** The renaming enabled easier result processing in Test.php, which is discussed in more detail in the [https://wikitech.wikimedia.org/w/index.php?title=User:AndreaWest/WDQS_Testing/#Code_Modifications Code Modifications section] below
* Note that, other than the name changes above, the .rq and .srx files are unmodified from the original repository
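For reference, the sketch below shows the general shape of a manifest-all.ttl entry, using the standard W3C test-manifest vocabulary. The subject IRI, default namespace, test name and data file name are placeholders, and the actual entries in the GeoSPARQL repository may differ in detail:
<nowiki>@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix mf:  <http://www.w3.org/2001/sw/DataAccess/tests/test-manifest#> .
@prefix qt:  <http://www.w3.org/2001/sw/DataAccess/tests/test-query#> .
@prefix :    <https://example.org/geosparql-benchmark/manifest#> .   # placeholder default namespace

# Placeholder entry tying a GeoSPARQL query to its input data and an expected result
:req9-query4 rdf:type mf:QueryEvaluationTest ;
    mf:name "GeoSPARQL Requirement 9, query 4" ;
    mf:action [ qt:query :query-r09-4.rq ;
                qt:data  :benchmark-data.ttl ] ;
    mf:result :query-r09-4-alternative-1.srx .
# The other acceptable answer, query-r09-4-alternative-2.srx, sits alongside this file
# and is found via the '-alternative-' naming convention during result checking.</nowiki>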


=== Incorporating the Tests Using Git Submodules ===
Both the BorderCloud and updated TFT repositories incorporate tests using ''git submodules''. Therefore, if the tests are updated in either the RDF or GeoSPARQL repositories, the changes have to be incorporated/merged into the TFT repository. This is accomplished with the following commands:
<nowiki># mysubmoduledir and TFTtopleveldir are placeholders for the submodule directory
# and the top-level TFT clone, respectively
cd mysubmoduledir
git submodule update --init --remote
cd TFTtopleveldir
git diff    # should show changes to mysubmoduledir
git add mysubmoduledir
git commit -m "Updated submodule"</nowiki>


== Code Modifications ==
The TFT codebase was modified to not require external databases or Docker, and to allow tests to be pulled from a local file server (for example, a directory published as a simple HTTP server) or from a different test repository. The goal was to make minimal changes to the infrastructure.  


The following files were updated and are available in the [https://github.com/AndreaWesterinen/TFT AndreaWesterinen/TFT repository]. This is the repository that is cloned in the instructions below.
* config.ini
** Updated to test "standard" SPARQL 1.1, to reference the correct repository and local path for the tests, to add a new listTestSuite entry (with the W3C SPARQL test location), and to reference the location of the databases to be used in SERVICE queries
** The original entries from the file are commented out using a beginning semi-colon (";")
** Note that without the new listTestSuite entry, when running ''php ./tft'', many of the tests were unable to locate the appropriate input/output files
*** Although not elegant, this was the fastest and easiest solution to the problem
* AbstractTest.php, Test.php and Tools.php
** Where file names used the default namespaces in the manifest*.ttl files (for example, "@prefix : <http://www.w3.org/2009/sparql/docs/tests/data-sparql11/update-silent/manifest#> ."), the reference to "manifest#" is removed using str_replace() (see the sketch following this list)
** (For Test.php) Requests to the SERVICE endpoints to load data required the addition of "update" to the SPARQL endpoint addresses
*** These changes were made to the clearAllTriples() and importGraphInput() functions
*** There was no CLI option for ''php ./tft'' to specify different update and query endpoints, as was possible for the test suite and test databases
** (For Test.php) Test evaluation required checking multiple "alternative" result files
*** Changes were made to the checkResult() function
*** xxx
* tft and tft-testsuite
** Clarified the 'usage' text and error messages
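As an illustration of the str_replace() change, a minimal sketch is shown below (the variable name is a placeholder; this is not the exact TFT code):
<nowiki><?php
// Resolving ":query_for_test.rq" against the manifest's default namespace yields an
// IRI ending in "manifest#query_for_test.rq"; stripping "manifest#" leaves a reference
// without the manifest fragment
$fileReference = 'http://www.w3.org/2009/sparql/docs/tests/data-sparql11/update-silent/manifest#query_for_test.rq';
$fileReference = str_replace('manifest#', '', $fileReference);
echo $fileReference;
// http://www.w3.org/2009/sparql/docs/tests/data-sparql11/update-silent/query_for_test.rq</nowiki>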


== Executing the Tests ==

The following execution example uses a local copy of the Stardog server (already installed on my laptop) to validate the changes and the overall process.

* Start the triple store with security disabled
** With security enabled, accessing the SERVICE endpoints resulted in permission errors. The ''php ./tft'' code does not allow the specification of the SERVICE endpoints' user names and passwords (as it does for the test details and tested databases). Rather than addressing this problem, the shortcut of disabling security was taken.
** Using the command below, Stardog is accessible on localhost at port 5820
<nowiki>stardog-admin server start --bind 127.0.0.1 --disable-security</nowiki>
* Set up the necessary data stores in the triple store
** The example* stores represent databases accessed as SERVICEs
** The tft-tests database holds the test details and results
** The tft-stardog data store is the database being tested
<nowiki>stardog-admin db create -n example
stardog-admin db create -n example1
stardog-admin db create -n example2
stardog-admin db create -n tft-tests
stardog-admin db create -n tft-stardog</nowiki>
* Get the TFT codebase and RDF tests
<nowiki>git clone --recursive https://github.com/AndreaWesterinen/TFT</nowiki>
* Move to the TFT directory just created
<nowiki>cd TFT</nowiki>
* Install the BorderCloud SPARQL client (which requires Composer)
<nowiki>composer install</nowiki>
* Move to the TFT/tests directory and start a local HTTP server to provide access to the test files
** These files are read during test suite setup (when running ''php ./tft-testsuite'') and when loading the SERVICE endpoint data (when running ''php ./tft'')
<nowiki>cd tests
python3 -m http.server 8080</nowiki>
* Load the tests into the tft-tests data store
<nowiki>php ./tft-testsuite -a -q 'http://localhost:5820/tft-tests/query' -u 'http://localhost:5820/tft-tests/update'</nowiki>
* If everything is running correctly, you should see output similar to:
<nowiki>Configuration about tests :
- Endpoint type        : standard
- Endpoint query       : http://localhost:5820/tft-tests/query
- Endpoint update      : http://localhost:5820/tft-tests/update
- Mode install all     : ON
- Test suite : URL     :
- Test suite : folder  :
- Mode verbose         : OFF
- Mode debug           : OFF
============ CLEAN GRAPH <https://bordercloud.github.io/rdf-tests/sparql11/data-sparql11/>
Before to clean : 0 triples
After to clean : 0 triples
=================================================================
Start to init the dataset via URL
......................................
38 new graphs</nowiki>
* Execute the tests (note the definition of the tested software name, tag and description)
<nowiki>php ./tft -q 'http://localhost:5820/tft-tests/query' -u 'http://localhost:5820/tft-tests/update' -tq http://localhost:5820/tft-stardog/query -tu http://localhost:5820/tft-stardog/update -o ./junit -r urn:results --softwareName="Stardog" --softwareDescribeTag=v7.9.1 --softwareDescribe=7.9.1-test</nowiki>
* You should see output similar to what is listed directly below. There are a few items to note:
** The results use the convention '.' for success, 'F' for failure, 'E' for an error, and 'S' for skipped
** The Protocol Tests do not execute correctly since their "action" predicates are commented out; they will fail
** The large number of tests marked as "skipped" in the QueryEvaluationTest are caused by TFT infrastructure errors related to entailment. These tests are not currently relevant to Wikidata and will not present a problem.
** The tests that reference "http://www.w3.org/2009/sparql/docs/tests/data-sparql11/" (in the latter part of the output) are an artifact of the config.ini file, as noted in the section above. The second set of test results can be ignored.
<nowiki>Configuration about tests :
- Graph of output EARL : urn:results2
- Output of tests      : ./junit
- Endpoint type        : standard
- Endpoint query       : http://localhost:5820/tft-tests/query
- Endpoint update      : http://localhost:5820/tft-tests/update
- TEST : Endpoint type        : standard
- TEST : Endpoint query       : http://localhost:5820/tft-stardog/query
- TEST : Endpoint update      : http://localhost:5820/tft-stardog/update
- Mode verbose         : OFF
- Mode debug           : OFF
==================================================================
TEST : https://bordercloud.github.io/rdf-tests/sparql11/data-sparql11/

		TESTS : ProtocolTest
.Nb tests : 3
FFF
--------------------------------------------------------------------
TESTS : PositiveSyntaxTest
.Nb tests : 63
F.................................F.FF.........................

--------------------------------------------------------------------
TESTS : NegativeSyntaxTest
.Nb tests : 43
...........................................

--------------------------------------------------------------------
TESTS : QueryEvaluationTest.Nb tests : 252
...........................................................................................FESESESSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS.....F.......F.......................................................................................................................F.F.................................................................................F.F....ESESESESESES.....F.F......................

		TESTS : CSVResultFormatTest
.Nb tests : 3
ESESES
		TESTS : UpdateEvaluationTest
.Nb tests : 93
.........................................................................F...................
		TESTS : PositiveUpdateSyntaxTest
.Nb tests : 42
.........F..........F..................F..
		TESTS : NegativeUpdateSyntaxTest
.Nb tests : 13
.........FF.F
 END TESTS
==================================================================
TEST : http://www.w3.org/2009/sparql/docs/tests/data-sparql11/

		TESTS : ProtocolTest
.Nb tests : 0

--------------------------------------------------------------------
TESTS : PositiveSyntaxTest
.Nb tests : 0


--------------------------------------------------------------------
TESTS : NegativeSyntaxTest
.Nb tests : 0


--------------------------------------------------------------------
TESTS : QueryEvaluationTest.Nb tests : 0


		TESTS : CSVResultFormatTest
.Nb tests : 0

		TESTS : UpdateEvaluationTest
.Nb tests : 0

		TESTS : PositiveUpdateSyntaxTest
.Nb tests : 0

		TESTS : NegativeUpdateSyntaxTest
.Nb tests : 0

 END TESTS</nowiki>
* To determine the final results, execute the query below
** Note that the graph name is the one specified with the -r option in the ''php ./tft'' instruction above
<nowiki>stardog query execute tft-tests "prefix earl: <http://www.w3.org/ns/earl#>
SELECT ?out (COUNT(DISTINCT ?assertion) AS ?cnt)
WHERE
{
        GRAPH <urn:results> {
                ?assertion a earl:Assertion.
                ?assertion earl:test ?test.
                ?assertion earl:result ?result.
                ?result earl:outcome ?out .
        }
} GROUP BY ?out"
* Results will be reported as shown:
<nowiki>+------------------------------------+-------+
|                out                 |  cnt  |
+------------------------------------+-------+
| http://www.w3.org/ns/earl#passed   | 681   |
| http://www.w3.org/ns/earl#failed   | 23    |
| http://www.w3.org/ns/earl#error    | 12    |
| http://www.w3.org/ns/earl#untested | 152   |
+------------------------------------+-------+

Query returned 4 results in 00:00:00.136</nowiki>
* To see the tests which failed, execute this query:
<nowiki>stardog query tft-tests "prefix earl: <http://www.w3.org/ns/earl#>
select distinct ?s where {
        GRAPH <urn:results> { ?s earl:outcome earl:failed  }
}"
+----------------------------------------------------------------------------------+
|                                        s                                         |
+----------------------------------------------------------------------------------+
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/protocol/manifest#query_g |
| et/Protocol/2022-05-09T20:03:31+00:00                                            |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/protocol/manifest#query_p |
| ost_form/Protocol/2022-05-09T20:03:31+00:00                                      |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/protocol/manifest#update_ |
| post_form/Protocol/2022-05-09T20:03:31+00:00                                     |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-fed/manifest#test_ |
| 1/Syntax/2022-05-09T20:03:31+00:00                                               |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-query/manifest#tes |
| t_4/Syntax/2022-05-09T20:03:31+00:00                                             |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-query/manifest#tes |
| t_41/Syntax/2022-05-09T20:03:31+00:00                                            |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-query/manifest#tes |
| t_42/Syntax/2022-05-09T20:03:31+00:00                                            |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/construct/manifest#constr |
| uctwhere04/Response/2022-05-09T20:03:31+00:00                                    |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/exists/manifest#exists03/ |
| Response/2022-05-09T20:03:31+00:00                                               |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/functions/manifest#bnode0 |
| 1/Response/2022-05-09T20:03:31+00:00                                             |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/json-res/manifest#jsonres |
| 01/Response/2022-05-09T20:03:31+00:00                                            |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/json-res/manifest#jsonres |
| 02/Response/2022-05-09T20:03:31+00:00                                            |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/property-path/manifest#pp |
| 34/Response/2022-05-09T20:03:31+00:00                                            |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/property-path/manifest#pp |
| 35/Response/2022-05-09T20:03:31+00:00                                            |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/subquery/manifest#subquer |
| y02/Response/2022-05-09T20:03:31+00:00                                           |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/subquery/manifest#subquer |
| y03/Response/2022-05-09T20:03:31+00:00                                           |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/drop/manifest#dawg-drop-n |
| amed-01/Response/2022-05-09T20:03:31+00:00                                       |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-update-1/manifest# |
| test_18/Response/2022-05-09T20:03:31+00:00                                       |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-update-1/manifest# |
| test_28/Response/2022-05-09T20:03:31+00:00                                       |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-update-1/manifest# |
| test_8/Response/2022-05-09T20:03:31+00:00                                        |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-update-1/manifest# |
| test_50/Syntax/2022-05-09T20:03:31+00:00                                         |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-update-1/manifest# |
| test_51/Syntax/2022-05-09T20:03:31+00:00                                         |
| http://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-update-1/manifest# |
| test_54/Syntax/2022-05-09T20:03:31+00:00                                         |
+----------------------------------------------------------------------------------+

Query returned 23 results in 00:00:00.129</nowiki>
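The same query pattern can be reused for the other outcomes reported in the summary above. For example, the variant below (identical to the failed-tests query except for the earl:error outcome shown in the summary) lists the tests that reported errors:
<nowiki>stardog query tft-tests "prefix earl: <http://www.w3.org/ns/earl#>
select distinct ?s where {
        GRAPH <urn:results> { ?s earl:outcome earl:error }
}"</nowiki>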

=== Getting More Information Using Verbose Mode ===

xx

== How to Extend the Tests ==

xx

* In the GeoSPARQL repo + discuss submodule implications
* New tests overall
* New data
* What has to be available for LOAD reference (why input data only?)