2004 PoLiDBMS: Design and Prototype Implementation of a DBMS for Portable Devices

C. Bolchini, C. Curino, M. Giorgetta, A. Giusti, A. Miele, F. A. Schreiber, and L. Tanca, “PoLiDBMS: Design and Prototype Implementation of a DBMS for Portable Devices,” in Proc. of the 12th Italian Symposium on Advanced Database Systems (SEBD 2004), 2004, pp. 166-177.

ABSTRACT:

BIBTEX:

@inproceedings{SEBD2004,

author = {Cristiana Bolchini and Carlo Curino and Marco Giorgetta and Alessandro Giusti and Antonio Miele and Fabio A. Schreiber and Letizia Tanca},
Isbn = {88-901409-1-7},
Booktitle = {Proc. of the 12th Italian Symposium on Advanced Database Systems (SEBD 2004)},
Location = {S. Margherita di Pula, Cagliari, Italy},
Pages = {166-177},
Title = {PoLiDBMS: Design and Prototype Implementation of a DBMS for Portable Devices},
pdf = {sebd2004.pdf},
Year = {2004}
}

2006 Ontology-based Information Tailoring

C. Curino, E. Quintarelli, and L. Tanca, “Ontology-based Information Tailoring,” in Proc. IEEE of 2nd Int. Workshop on Database Interoperability (InterDB 2006), 2006, pp. 5-5.

ABSTRACT:

Current applications are often forced to filter the richness of data sources in order to reduce the information noise the user is subject to. We consider this aspect as a critical issue of applications, to be factorized at the data management level. The Context-ADDICT system, leveraging on ontology-based context and domain models, is able to personalize the data to be made available to the user by “context-aware tailoring”. In this paper we present a formal approach to the definition of the relationship between context (represented by an appropriate context model) and application domain (modeled by a domain ontology). Once such relationship has been defined, we are able to work out the boundary of the portion of the domain relevant to a user in a certain context. We also sketch the implementation of a visual tool supporting the application designer in this modeling task.

BIBTEX:

@inproceedings{INTERDB2006,
author = {Carlo Curino and Elisa Quintarelli and Letizia Tanca},
Booktitle = {Proc. IEEE of 2nd Int. Workshop on Database Interoperability (InterDB 2006)},
Keywords = {Context-ADDICT scenario and architecture},
Location = {Atlanta, USA},
Month = {April},
Title = {Ontology-based Information Tailoring},
Pages = {5-5},
doi = {http://dx.doi.org/10.1109/ICDEW.2006.104},
pdf = {interdb2006.pdf},
Year = {2006}

}

2006 Context integration for mobile data tailoring

C. Bolchini, C. Curino, F. A. Schreiber, and L. Tanca, “Context integration for mobile data tailoring,” in Proc. IEEE/ACM of Int. Conf. on Mobile Data Management, 2006.

ABSTRACT:

Independent, heterogeneous, distributed, sometimes transient and mobile data sources produce an enormous amount of information that should be semantically integrated and filtered, or, as we say, tailored, based on the user’s interests and context. Since both the user and the data sources can be mobile, and the communication might be unreliable, caching the information on the user device may become really useful. Therefore new challenges have to be faced such as: data filtering in a context-aware fashion, integration of not-known-in-advance data sources, automatic extraction of the semantics. We propose a novel system named Context-ADDICT (Context-Aware Data Design, Integration, Customization and Tailoring) able to deal with the described scenario. The system we are designing aims at tailoring the available information to the needs of the current user in the current context, in order to offer a more manageable amount of information; such information is to be cached on the user’s device according to policies defined at design-time, to cope with data source transiency. This paper focuses on the information representation and tailoring problem and on the definition of the global architecture of the system.

 

BIBTEX:

@inproceedings{MDM2006,

author = {Cristiana Bolchini and Carlo Curino and Fabio A. Schreiber and Letizia Tanca},
Booktitle = {Proc. IEEE/ACM of Int. Conf. on Mobile Data Management},
Keywords = {Context-ADDICT scenario and architecture},
Location = {Nara, Japan},
Month = {May},
Organization = {IEEE, ACM},
Title = {Context integration for mobile data tailoring},
doi = {http://dx.doi.org/10.1109/MDM.2006.52},
pdf = {mdm2006.pdf},
Year = {2006}

}

2007 CADD: a tool for context modeling and data tailoring

C. Bolchini, C. A. Curino, G. Orsi, E. Quintarelli, F. A. Schreiber, and L. Tanca, “CADD: a tool for context modeling and data tailoring,” in Proc. IEEE Intl. Conf. on Mobile Data Management (MDM), 2007, pp. 221-223.

ABSTRACT:

The aim of this demonstration is the presentation of (1) the design methodology, (2) the corresponding design tool (CADD), and (3) the client-server application that we have developed to support context-aware data tailoring.

BIBTEX:

@inproceedings{MDM2007,
author = {Cristiana Bolchini and Carlo A. Curino and Giorgio Orsi and Elisa Quintarelli and Fabio A. Schreiber and Letizia Tanca},
Title = {CADD: a tool for context modeling and data tailoring},
Booktitle = {Proc. IEEE Intl. Conf. on Mobile Data Management (MDM)},
Pages = {221-223},
Year = {2007},
pdf = {mdm2007.pdf}

}

DB2: INSTEAD OF TRIGGERS

While reading about DB2 I discovered an interesting functionality, INSTEAD OF triggers, which can be used to support inserts, updates, and deletes against complex views. I have not investigated their use yet, but they look like a powerful mechanism. I’ll keep you posted on my experiments.
INSTEAD OF TRIGGER:
INSTEAD OF triggers describe how to perform insert, update, and delete operations against views that are too complex to support these operations natively. INSTEAD OF triggers allow applications to use a view as the sole interface for all SQL operations (insert, delete, update and select). Usually, INSTEAD OF triggers contain the inverse of the logic applied in a view body.
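I have not tried this yet, but as a first sketch (untested, and assuming a hypothetical emp_names view over an employee table), I would expect an INSTEAD OF trigger that inverts the view logic on INSERT to look roughly like this:

```sql
-- Hypothetical view that concatenates two base columns.
CREATE VIEW emp_names (empno, full_name) AS
  SELECT empno, firstname || ' ' || lastname FROM employee;

-- INSTEAD OF trigger applying the inverse of the view logic:
-- it splits full_name back into the two base columns.
CREATE TRIGGER emp_names_insert
  INSTEAD OF INSERT ON emp_names
  REFERENCING NEW AS n
  FOR EACH ROW MODE DB2SQL
  INSERT INTO employee (empno, firstname, lastname)
  VALUES (n.empno,
          SUBSTR(n.full_name, 1, LOCATE(' ', n.full_name) - 1),
          SUBSTR(n.full_name, LOCATE(' ', n.full_name) + 1));
```

With a companion trigger for UPDATE and DELETE, the view could then serve as the sole interface for all SQL operations, as the documentation suggests.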

DDL TRIGGERS: Oracle is the way.

For a project I’m currently working on I need to use DDL triggers. To the best of my current understanding:

MySQL: no support for DDL triggers yet; they are among the “remote”
TODOs, and we will probably not see them for a long time. (I tried to
install a trigger on the <tt>information_schema</tt> with no results,
since the information_schema is in general a virtual DB.)

DB2: no direct support for DDL triggers; workarounds are possible
using the tracing functionalities, as discussed at
http://database.ittoolbox.com/groups/technical-functional/db2-l/ddl-triggers-1147710.
In DB2, too, triggers on the <tt>information_schema</tt> are not possible.

Oracle and SQL Server 2005: support DDL triggers.

The server environment we are operating on is Linux-based, so I’ll go for
Oracle.

Here is a short reference to DDL triggers in Oracle:
http://www.psoug.org/reference/ddl_trigger.html
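Based on that reference, a minimal sketch of what I expect to write (table and trigger names are my own, and I have not tested this yet):

```sql
-- Hypothetical audit table for schema changes.
CREATE TABLE ddl_log (
  event_name  VARCHAR2(30),
  object_type VARCHAR2(30),
  object_name VARCHAR2(30),
  fired_at    DATE
);

-- Fires after any DDL statement issued in the current schema;
-- the ora_* event attribute functions describe the triggering event.
CREATE OR REPLACE TRIGGER log_schema_ddl
AFTER DDL ON SCHEMA
BEGIN
  INSERT INTO ddl_log (event_name, object_type, object_name, fired_at)
  VALUES (ora_sysevent, ora_dict_obj_type, ora_dict_obj_name, SYSDATE);
END;
/
```

If this works as advertised, every CREATE/ALTER/DROP in the schema should leave a row in ddl_log, which is exactly what I need for the project.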

 
I will install Oracle on the Ubuntu 6.06 LTS server we got at UCLA. This seems a good guide for installing Oracle 10g on Debian sid:
http://linux.togaware.com/survivor/Oracle_10g.html
I’m going to use it to install Oracle 11g on Ubuntu 6.06 LTS.

2007 X-SOM: A Flexible Ontology Mapper

C. Curino, G. Orsi, and L. Tanca, “X-SOM: A Flexible Ontology Mapper,” in Proc. DEXA Workshops, 2007, pp. 424-428.

ABSTRACT:

System interoperability is a well known issue, especially for heterogeneous information systems, where ontology-based representations may support automatic and user-transparent integration. In this paper we present X-SOM: an ontology mapping and integration tool. The contribution of our tool is a modular and extensible architecture that automatically combines several matching techniques by means of a neural network, performing also ontology debugging to avoid inconsistencies. Besides describing the tool components, we discuss the prototype implementation, which has been tested against the OAEI 2006 benchmark with promising results.

BIBTEX:

@inproceedings{conf/dexaw/CurinoOT07,
    author = {Carlo Curino and Giorgio Orsi and Letizia Tanca},
    title = {X-SOM: A Flexible Ontology Mapper},
    booktitle = {DEXA Workshops},
    pages = {424-428},
    publisher = {IEEE Computer Society},
    isbn = {0-7695-2932-1},
    doi = {http://doi.ieeecomputersociety.org/10.1109/DEXA.2007.175},
    year = {2007}
}

MySQL views: 200 views chain limit?

I’m doing some stress tests on MySQL views…

– I created a chain of very simple views:
CREATE VIEW view1 AS SELECT * FROM `table1`;
CREATE VIEW view2 AS SELECT * FROM `view1`;
CREATE VIEW view3 AS SELECT * FROM `view2`;
….
(table1 contains 1000 records.) I created 200 of them, and MySQL seems to manage them very well…
mysql> SELECT empno FROM table1 WHERE birthdate='1953-04-08';
2 rows in set (0.00 sec)
mysql> SELECT empno FROM view1 WHERE birthdate='1953-04-08';
2 rows in set (0.00 sec)
mysql> SELECT empno FROM view50 WHERE birthdate='1953-04-08';
2 rows in set (0.00 sec)
mysql> SELECT empno FROM view100 WHERE birthdate='1953-04-08';
2 rows in set (0.00 sec)
mysql> SELECT empno FROM view200 WHERE birthdate='1953-04-08';
2 rows in set (0.01 sec)
These were the first executions of the views… the system might have cached something, but not much…
– Then I created another 300 views (all the way to view500)… MySQL tends to panic: lost connections, or it sometimes gets extremely slow…
(Here I change the date in the WHERE clause to defeat caching; this is why we get an empty set.)
mysql> SELECT empno FROM view100 WHERE birthdate='1953-04-09';
Empty set (0.00 sec)
mysql> SELECT empno FROM view200 WHERE birthdate='1953-04-09';
Empty set (0.01 sec)
mysql> SELECT empno FROM view300 WHERE birthdate='1953-04-09';
ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql> SELECT empno FROM view300 WHERE birthdate='1953-04-09';
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect…
Connection id: 1
Current database: viewstresstest
ERROR 2013 (HY000): Lost connection to MySQL server during query


I tested a little further, and the breaking point seems to be somewhere between 200 and 205 views in the chain…
My feeling is that the problem lies in the implementation and is not a theoretical limitation of scalability, because the performance drop is too sudden to be intrinsic to the approach…
I posted it on the MySQL forum but have gotten no answer so far…
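One thing I still want to verify (pure speculation on my part): each level of view nesting presumably costs some stack space during query resolution, so the crash around 200 views might simply be per-thread stack exhaustion. If so, raising thread_stack in my.cnf could move the threshold:

```ini
# my.cnf -- speculative tweak; the default thread stack is fairly small
# (on the order of a few hundred KB, depending on version and platform).
[mysqld]
thread_stack = 512K
```

I have not tried this yet; if it changes where the chain breaks, that would support the implementation-limit theory.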

If someone plans to repeat the test, this is the script for generating the views:

#!/bin/bash
# Generate a chain of N trivially nested views; N is the first argument.
# Pipe the output into mysql to create them.
numberofview=$1
echo "CREATE VIEW view1 AS SELECT * FROM \`table1\`;"
for ((cv=2; cv<=numberofview; cv++))
do
    cvold=$((cv-1))
    echo "CREATE VIEW view$cv AS SELECT * FROM \`view$cvold\`;"
done