Column-Oriented Datalog Materialization for Large Knowledge Graphs

From korrekt.org


Latest revision as of 20:41, 3 July 2016


Jacopo Urbani, Ceriel J. H. Jacobs, Markus Krötzsch

Column-Oriented Datalog Materialization for Large Knowledge Graphs



Abstract. The evaluation of Datalog rules over large Knowledge Graphs (KGs) is essential for many applications. In this paper, we present a new method of materializing Datalog inferences, which combines a column-based memory layout with novel optimization methods that avoid redundant inferences at runtime. The pro-active caching of certain subqueries further increases efficiency. Our empirical evaluation shows that this approach can often match or even surpass the performance of state-of-the-art systems, especially under restricted resources.
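The materialization the abstract describes, deriving all Datalog inferences while avoiding redundant rederivation, builds on the standard semi-naive evaluation strategy. The toy sketch below is a hypothetical illustration only (the `parent`/`ancestor` relations and all names are invented, and it merely hints at a column-oriented layout); it is not the implementation evaluated in the paper.

```python
# Hypothetical toy sketch: semi-naive Datalog materialization over a
# column-oriented relation layout. Illustrative only; not the system
# described in the paper.

def column_store(tuples):
    """Store a set of pairs as two parallel columns (column layout)."""
    seen = sorted(set(tuples))
    return ([t[0] for t in seen], [t[1] for t in seen])

def rows(rel):
    """View a two-column relation as a set of row tuples."""
    return set(zip(rel[0], rel[1]))

# EDB: parent facts (invented example data)
parent = column_store([("ann", "bob"), ("bob", "carl"), ("carl", "dora")])

# IDB rules:
#   ancestor(x,y) :- parent(x,y)
#   ancestor(x,z) :- ancestor(x,y), parent(y,z)
ancestor = rows(parent)   # first rule, applied once
delta = set(ancestor)     # facts derived in the previous round

# Semi-naive loop: join only the delta against parent, so every new
# inference uses at least one fact from the last round instead of
# rederiving the whole relation each iteration.
while delta:
    new = {(x, z)
           for (x, y) in delta
           for (y2, z) in rows(parent)
           if y == y2} - ancestor
    ancestor |= new
    delta = new

print(sorted(ancestor))  # transitive closure of the parent relation
```

The subtraction of already-known facts before the next round is what keeps the fixpoint computation from repeating work, which is the baseline the paper's column-based optimizations improve on.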

Published at AAAI 2016 (conference paper)

Download PDF: http://korrekt.org/papers/Urbani-Jacobs-Kroetzsch_Vlog-datalog-materialization-AAAI2016.pdf (last update: July 2, 2016)

Citation details

@inproceedings{urbani-jacobs-kroetzsch-aaai2016,
  author    = {Jacopo Urbani and Ceriel J. H. Jacobs and Markus Kr{\"o}tzsch},
  title     = {Column-Oriented Datalog Materialization for Large Knowledge Graphs},
  booktitle = {Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI 2016)},
  year      = {2016}
}


Topics

Rule languages
