I'm using LNDB to transfer data to SQL Server. LNDB is a new product so I'm curious how other developers are implementing it.
LNDB is a great way to get data into SQL Server, but I haven't yet decided on the best way to manage the data and tables. At first I figured the tables LNDB creates in SQL Server would become the main data tables in my database, but now I'm starting to think they might be better treated as a collection (staging) area that gets copied to other tables. The main issue I have with the LNDB-created tables is that I don't have complete control over them. For example:
1. If I add a new column with CRBasic, that column is automatically added to the table in SQL Server but the RecNum field is reset to 0 (another option is to archive the table).
2. If the structure of a column is changed (e.g. FP2 to UINT2), the table in SQL Server is archived and a new table is created.
These actions could make it difficult to manage and query the data over time.
So my plan is to copy data from the LNDB tables into my own tables (via a trigger or a SQL Server job). I found out that triggers don't work with LNDB tables, so I'm going to use a job (a request for trigger support in LNDB has been sent to Campbell). Via CRBasic, I set up a flag field as one of the columns in the SQL Server table. The SQL Server job will use that column to copy new records into my table, and then update the column to mark which records have been processed.
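For anyone interested, the job's copy step can be sketched in T-SQL. This is only an illustration: the table and column names (dbo.LNDB_Station1_Hourly, dbo.Station1_Hourly, the Processed flag, and the data columns) are hypothetical, and the flag convention (0 = new, 1 = copied) is my own.

```sql
BEGIN TRANSACTION;

-- Copy records the job has not seen yet into my own table.
INSERT INTO dbo.Station1_Hourly (TmStamp, RecNum, AirTemp, RH)
SELECT TmStamp, RecNum, AirTemp, RH
FROM   dbo.LNDB_Station1_Hourly
WHERE  Processed = 0;

-- Mark those same records as processed.
UPDATE dbo.LNDB_Station1_Hourly
SET    Processed = 1
WHERE  Processed = 0;

COMMIT TRANSACTION;
```

Wrapping both statements in one transaction (with a suitable isolation level or locking hint) guards against a record that LNDB inserts between the two statements being flagged as processed without ever being copied.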
Any other methods of LNDB implementation would be appreciated.
Bill
--- New reply ---
As a member of the team that worked on LNDB, we too are very interested in ideas others have here. What would you want to occur when datalogger table definitions change?
LNDB currently requires that all datalogger table columns have a corresponding database table column. There are currently 3 options when table definitions change:
1. Stop Storing Data (The user can manually alter/archive the database table)
2. Archive Database Table
3. Modify Database Table (new columns are added and old columns are orphaned; if a database column's datatype is not what is expected, an archive will occur)
* Last updated by: kwestwood on 4/21/2010 @ 10:24 AM *
--- New reply ---
I have read that the best primary key for a table is a one-column integer (int, bigint, etc.); this provides a key that is quickly indexed and efficiently linked to other tables. It would be great if the primary key on the LNDB tables were just RecNum, meaning the auto-increment of that column would never get reset.
Adding columns to a table is very common; when it happens, it seems LNDB could send an ALTER TABLE statement to SQL Server to add the column (without resetting RecNum).
I'm guessing removing columns could be handled the same way.
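For what it's worth, the T-SQL involved is straightforward. A sketch, where the table name dbo.Station1_Hourly and the column SoilTemp are hypothetical:

```sql
-- Add a new column; existing rows get NULL and RecNum is untouched.
ALTER TABLE dbo.Station1_Hourly ADD SoilTemp real NULL;

-- Completely remove a column that is no longer needed.
ALTER TABLE dbo.Station1_Hourly DROP COLUMN SoilTemp;
```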
Regarding the way columns are currently removed, I think they remain in the table and are filled with nulls going forward. That makes sense if you would like to retain the historical data but sometimes the data might no longer be needed so an option to completely remove the column would be nice.
It gets tricky when the structure of a column changes (e.g. FP2 to UINT2) because the data in the old structure might not fit into the new structure. I think the action of changing the structure currently requires a table archive. Maybe another option would be to retain the old column and add a new column for data going forward. Then I could possibly write a program to convert the data from the old column to the new column (with the option of removing the old column... as noted above).
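The retain-and-convert idea could look something like this in T-SQL (table and column names are hypothetical; I used int for UINT2 because SQL Server's smallint is signed and cannot hold the full 0-65535 range):

```sql
-- Keep the old FP2 column (stored as real) and add a column for the new format.
ALTER TABLE dbo.Station1_Hourly ADD WindDir_new int NULL;

-- One-time conversion of the historical data.
UPDATE dbo.Station1_Hourly
SET    WindDir_new = CAST(ROUND(WindDir, 0) AS int)
WHERE  WindDir_new IS NULL
  AND  WindDir IS NOT NULL;

-- Optionally remove the old column and take over its name.
ALTER TABLE dbo.Station1_Hourly DROP COLUMN WindDir;
EXEC sp_rename 'dbo.Station1_Hourly.WindDir_new', 'WindDir', 'COLUMN';
```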
The goal of all these suggestions is to end up with one clean, manageable table in SQL Server (one table per collection): no archives, no orphan columns, and a one-column primary key. That would let the LNDB tables serve as the source data tables in the database (with optimized queries, relations to other tables, etc.).
I know this is complicated stuff but you asked for my feedback!
Thanks
Bill
BillA's comment on this is almost exactly what I am dealing with at this moment.
Currently, we are installing 48 CR1000-powered units across a very wide area. Each unit has 4 tables. Instead of LNDB creating 4 tables and funneling all the data into those four, by the time the installation is done it will have amassed an unmanageable 192 tables in this database.
This causes other serious side effects when querying for a specific record. It seems the logical thing to do would be to build a feature into LNDB that gives the user the option to have LNDB use one table for multiple dataloggers that share the same datatable schema. That table would have an incremental integer primary key for easy querying. Is this kind of functionality possible?
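To illustrate what I mean (all names here are hypothetical): one shared table keyed by a single identity column, with a StationID column to tell the 48 units apart.

```sql
CREATE TABLE dbo.AllStations_Hourly (
    ID        bigint   IDENTITY(1,1) NOT NULL PRIMARY KEY,
    StationID int      NOT NULL,   -- which CR1000 the record came from
    TmStamp   datetime NOT NULL,
    RecNum    bigint   NOT NULL,   -- the datalogger's own record number
    AirTemp   real     NULL,
    RH        real     NULL
);

-- An index on (StationID, TmStamp) keeps per-unit queries fast.
CREATE INDEX IX_AllStations_Hourly_Station
    ON dbo.AllStations_Hourly (StationID, TmStamp);

-- Querying one unit's records is then a simple filter:
SELECT TmStamp, AirTemp
FROM   dbo.AllStations_Hourly
WHERE  StationID = 17
  AND  TmStamp >= '2010-04-01';
```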