From 3c3be934061b252d17626de1df0cdc0eec983c5e Mon Sep 17 00:00:00 2001 From: DS_Starter Date: Wed, 17 Oct 2018 14:36:27 +0000 Subject: [PATCH] 93_DbRep: contrib 8.3.0 git-svn-id: https://svn.fhem.de/fhem/trunk@17551 2b470e98-0d58-463d-a4d8-8e2adae1ed80 --- fhem/contrib/DS_Starter/93_DbLog.pm | 7349 -------------- fhem/contrib/DS_Starter/93_DbRep.pm | 13849 ++++++++++++++++++++++++++ 2 files changed, 13849 insertions(+), 7349 deletions(-) delete mode 100644 fhem/contrib/DS_Starter/93_DbLog.pm create mode 100644 fhem/contrib/DS_Starter/93_DbRep.pm diff --git a/fhem/contrib/DS_Starter/93_DbLog.pm b/fhem/contrib/DS_Starter/93_DbLog.pm deleted file mode 100644 index 545b7156d..000000000 --- a/fhem/contrib/DS_Starter/93_DbLog.pm +++ /dev/null @@ -1,7349 +0,0 @@ -############################################################################################################################################ -# $Id: 93_DbLog.pm 17374 2018-09-19 20:36:09Z DS_Starter $ -# -# 93_DbLog.pm -# written by Dr. Boris Neubert 2007-12-30 -# e-mail: omega at online dot de -# -# modified and maintained by Tobias Faust since 2012-06-26 -# e-mail: tobias dot faust at online dot de -# -# reduceLog() created by Claudiu Schuster (rapster) -# -# redesigned 2016-2018 by DS_Starter with credits by -# JoeAllb, DeeSpe -# -############################################################################################################################################ -# Versions History done by DS_Starter & DeeSPe: -# -# 3.12.5 12.10.2018 charFilter: "\xB0C" substitution by "°C" added and usage in DbLog_Log changed -# 3.12.4 10.10.2018 return non-saved datasets back in asynch mode only if transaction is used -# 3.12.3 08.10.2018 Log output of recuceLogNbl enhanced, some functions renamed -# 3.12.2 07.10.2018 $hash->{HELPER}{REOPEN_RUNS_UNTIL} contains the time the DB is closed -# 3.12.1 19.09.2018 use Time::Local (forum:#91285) -# 3.12.0 04.09.2018 corrected SVG-select (https://forum.fhem.de/index.php/topic,65860.msg815640.html#msg815640) -# 3.11.0 02.09.2018 reduceLog, reduceLogNbl - optional "days newer than" part added -# 3.10.10 05.08.2018 commandref revised reducelogNbl -# 3.10.9 23.06.2018 commandref added hint about special characters in passwords -# 3.10.8 21.04.2018 addLog - not available reading can be added as new one (forum:#86966) -# 3.10.7 16.04.2018 fix generate addLog-event if device or reading was not found by addLog -# 3.10.6 13.04.2018 verbose level in addlog changed if reading not found -# 3.10.5 12.04.2018 fix warnings -# 3.10.4 11.04.2018 fix addLog if no valueFn is used -# 3.10.3 10.04.2018 minor fixes in addLog -# 3.10.2 09.04.2018 add qualifier CN= to addlog -# 3.10.1 04.04.2018 changed event parsing of Weather -# 3.10.0 02.04.2018 addLog consider DbLogExclude in Devices, keyword "!useExcludes" to switch off considering -# DbLogExclude in addLog, DbLogExclude & DbLogInclude can handle "/" in Readingname, -# commandref (reduceLog) revised -# 3.9.0 17.03.2018 DbLog_ConnectPush state-handling changed, attribute excludeDevs enhanced in DbLog_Log -# 3.8.9 10.03.2018 commandref revised -# 3.8.8 05.03.2018 fix device doesn't exit if configuration couldn't be read -# 3.8.7 28.02.2018 changed DbLog_sampleDataFn - no change limits got fron SVG, commandref revised -# 3.8.6 25.02.2018 commandref revised (forum:#84953) -# 3.8.5 16.02.2018 changed ParseEvent for Zwave -# 3.8.4 07.02.2018 minor fixes of "$@", code review, eval for userCommand, DbLog_ExecSQL1 (forum:#83973) -# 3.8.3 03.02.2018 call execmemcache only 
syncInterval/2 if cacheLimit reached and DB is not reachable, fix handling of -# "$@" in DbLog_PushAsync -# 3.8.2 31.01.2018 RaiseError => 1 in DbLog_ConnectPush, DbLog_ConnectNewDBH, configCheck improved -# 3.8.1 29.01.2018 Use of uninitialized value $txt if addlog has no value -# 3.8.0 26.01.2018 escape "|" in events to log events containing it -# 3.7.1 25.01.2018 fix typo in commandref -# 3.7.0 21.01.2018 parsed event with Log 5 added, configCheck enhanced by configuration read check -# 3.6.5 19.01.2018 fix lot of logentries if disabled and db not available -# 3.6.4 17.01.2018 improve DbLog_Shutdown, extend configCheck by shutdown preparation check -# 3.6.3 14.01.2018 change verbose level of addlog "no Reading of device ..." message from 2 to 4 -# 3.6.2 07.01.2018 new attribute "exportCacheAppend", change function exportCache to respect attr exportCacheAppend, -# fix DbLog_execmemcache verbose 5 message -# 3.6.1 04.01.2018 change SQLite PRAGMA from NORMAL to FULL (Default Value of SQLite) -# 3.6.0 20.12.2017 check global blockingCallMax in configCheck, configCheck now available for SQLITE -# 3.5.0 18.12.2017 importCacheFile, addCacheLine uses useCharfilter option, filter only $event by charfilter -# 3.4.0 10.12.2017 avoid print out {RUNNING_PID} by "list device" -# 3.3.0 07.12.2017 avoid print out the content of cache by "list device" -# 3.2.0 06.12.2017 change attribute "autocommit" to "commitMode", activate choice of autocommit/transaction in logging -# Addlog/addCacheLine change $TIMESTAMP check, -# rebuild DbLog_Push/DbLog_PushAsync due to bugfix in update current (Forum:#80519), -# new attribute "useCharfilter" for Characterfilter usage -# 3.1.1 05.12.2017 Characterfilter added to avoid unwanted characters what may destroy transaction -# 3.1.0 05.12.2017 new set command addCacheLine -# 3.0.0 03.12.2017 set begin_work depending of AutoCommit value, new attribute "autocommit", some minor corrections, -# report working progress of reduceLog,reduceLogNbl in logfile (verbose 3), enhanced log output -# (e.g. 
of execute_array) -# 2.22.15 28.11.2017 some Log3 verbose level adapted -# 2.22.14 18.11.2017 create state-events if state has been changed (Forum:#78867) -# 2.22.13 20.10.2017 output of reopen command improved -# 2.22.12 19.10.2017 avoid illegible messages in "state" -# 2.22.11 13.10.2017 DbLogType expanded by SampleFill, DbLog_sampleDataFn adapted to sort case insensitive, commandref revised -# 2.22.10 04.10.2017 Encode::encode_utf8 of $error, DbLog_PushAsyncAborted adapted to use abortArg (Forum:77472) -# 2.22.9 04.10.2017 added hint to SVG/DbRep in commandref -# 2.22.8 29.09.2017 avoid multiple entries in Dopdown-list when creating SVG by group Device:Reading in DbLog_sampleDataFn -# 2.22.7 24.09.2017 minor fixes in configcheck -# 2.22.6 22.09.2017 commandref revised -# 2.22.5 05.09.2017 fix Internal MODE isn't set correctly after DEF is edited, nextsynch is not renewed if reopen is -# set manually after reopen was set with a delay Forum:#76213, Link to 98_FileLogConvert.pm added -# 2.22.4 27.08.2017 fhem chrashes if database DBD driver is not installed (Forum:#75894) -# 2.22.3 11.08.2017 Forum:#74690, bug unitialized in row 4322 -> $ret .= SVG_txt("par_${r}_0", "", "$f0:$f1:$f2:$f3", 20); -# 2.22.2 08.08.2017 Forum:#74690, bug unitialized in row 737 -> $ret .= ($fld[0]?$fld[0]:" ").'.'.($fld[1]?$fld[1]:" "); -# 2.22.1 07.08.2017 attribute "suppressAddLogV3" to suppress verbose3-logentries created by DbLog_AddLog -# 2.22.0 25.07.2017 attribute "addStateEvent" added -# 2.21.3 24.07.2017 commandref revised -# 2.21.2 19.07.2017 changed readCfg to report more error-messages -# 2.21.1 18.07.2017 change configCheck for DbRep Report_Idx -# 2.21.0 17.07.2017 standard timeout increased to 86400, enhanced explaination in configCheck -# 2.20.0 15.07.2017 state-Events complemented with state by using $events = deviceEvents($dev_hash,1) -# 2.19.0 11.07.2017 replace {DBMODEL} by {MODEL} completely -# 2.18.3 04.07.2017 bugfix (links with $FW_ME deleted), MODEL as Internal (for statistic) -# 2.18.2 29.06.2017 check of index for DbRep added -# 2.18.1 25.06.2017 DbLog_configCheck/ DbLog_sqlget some changes, commandref revised -# 2.18.0 24.06.2017 configCheck added (MySQL, PostgreSQL) -# 2.17.1 17.06.2017 fix log-entries "utf8 enabled" if SVG's called, commandref revised, enable UTF8 for DbLog_get -# 2.17.0 15.06.2017 enable UTF8 for MySQL (entry in configuration file necessary) -# 2.16.11 03.06.2017 execmemcache changed for SQLite avoid logging if deleteOldDaysNbl or reduceLogNbL is running -# 2.16.10 15.05.2017 commandref revised -# 2.16.9.1 11.05.2017 set userCommand changed - -# Forum: https://forum.fhem.de/index.php/topic,71808.msg633607.html#msg633607 -# 2.16.9 07.05.2017 addlog syntax changed to "addLog devspec:Reading [Value]" -# 2.16.8 06.05.2017 in valueFN $VALUE and $UNIT can now be set to '' or 0 -# 2.16.7 20.04.2017 fix $now at addLog -# 2.16.6 18.04.2017 AddLog set lasttime, lastvalue of dev_name, dev_reading -# 2.16.5 16.04.2017 DbLog_checkUsePK changed again, new attribute noSupportPK -# 2.16.4 15.04.2017 commandref completed, DbLog_checkUsePK changed (@usepkh = "", @usepkc = "") -# 2.16.3 07.04.2017 evaluate reading in DbLog_AddLog as regular expression -# 2.16.2 06.04.2017 sub DbLog_cutCol for cutting fields to maximum length, return to "$lv = "" if(!$lv);" because -# of problems with MinIntervall, DbLogType-Logging in database cycle verbose 5, make $TIMESTAMP -# changable by valueFn -# 2.16.1 04.04.2017 changed regexp $exc =~ s/(\s|\s*\n)/,/g; , DbLog_AddLog changed, enhanced sort of 
listCache -# 2.16.0 03.04.2017 new set-command addLog -# 2.15.0 03.04.2017 new attr valueFn using for perl expression which may change variables and skip logging -# unwanted datasets, change DbLog_ParseEvent for ZWAVE, -# change DbLogExclude / DbLogInclude in DbLog_Log to "$lv = "" if(!defined($lv));" -# 2.14.4 28.03.2017 pre-connection check in DbLog_execmemcache deleted (avoid possible blocking), attr excludeDevs -# can be specified as devspec -# 2.14.3 24.03.2017 DbLog_Get, DbLog_Push changed for better plotfork-support -# 2.14.2 23.03.2017 new reading "lastCachefile" -# 2.14.1 22.03.2017 cacheFile will be renamed after successful import by set importCachefile -# 2.14.0 19.03.2017 new set-commands exportCache, importCachefile, new attr expimpdir, all cache relevant set-commands -# only in drop-down list when asynch mode is used, minor fixes -# 2.13.6 13.03.2017 plausibility check in set reduceLog(Nbl) enhanced, minor fixes -# 2.13.5 20.02.2017 check presence of table current in DbLog_sampleDataFn -# 2.13.4 18.02.2017 DbLog_Push & DbLog_PushAsync: separate eval-routines for history & current table execution -# to decouple commit or rollback transactions, DbLog_sampleDataFn changed to avoid fhem from crash if table -# current is not present and DbLogType isn't set -# 2.13.3 18.02.2017 default timeout of DbLog_PushAsync increased to 1800, -# delete {HELPER}{xx_PID} in reopen function -# 2.13.2 16.02.2017 deleteOldDaysNbl added (non-blocking implementation of deleteOldDays) -# 2.13.1 15.02.2017 clearReadings limited to readings which won't be recreated periodicly in asynch mode and set readings only blank, -# eraseReadings added to delete readings except reading "state", -# countNbl non-blocking by DeeSPe, -# rename reduceLog non-blocking to reduceLogNbl and implement the old reduceLog too -# 2.13.0 13.02.2017 made reduceLog non-blocking by DeeSPe -# 2.12.5 11.02.2017 add support for primary key of PostgreSQL DB (Rel. 9.5) in both modes for current table -# 2.12.4 09.02.2017 support for primary key of PostgreSQL DB (Rel. 
9.5) in both modes only history table -# 2.12.3 07.02.2017 set command clearReadings added -# 2.12.2 07.02.2017 support for primary key of SQLITE DB in both modes -# 2.12.1 05.02.2017 support for primary key of MySQL DB in synch mode -# 2.12 04.02.2017 support for primary key of MySQL DB in asynch mode -# 2.11.4 03.02.2017 check of missing modules added -# 2.11.3 01.02.2017 make errorlogging of DbLog_PushAsync more identical to DbLog_Push -# 2.11.2 31.01.2017 if attr colEvent, colReading, colValue is set, the limitation of fieldlength is also valid -# for SQLite databases -# 2.11.1 30.01.2017 output to central logfile enhanced for DbLog_Push -# 2.11 28.01.2017 DbLog_connect substituted by DbLog_connectPush completely -# 2.10.8 27.01.2017 DbLog_setinternalcols delayed at fhem start -# 2.10.7 25.01.2017 $hash->{HELPER}{COLSET} in DbLog_setinternalcols, DbLog_Push changed due to -# issue Turning on AutoCommit failed -# 2.10.6 24.01.2017 DbLog_connect changed "connect_cashed" to "connect", DbLog_Get, DbLog_chartQuery now uses -# DbLog_ConnectNewDBH, Attr asyncMode changed -> delete reading cacheusage reliable if mode was switched -# 2.10.5 23.01.2017 count, userCommand, deleteOldDays now uses DbLog_ConnectNewDBH -# DbLog_Push line 1107 changed -# 2.10.4 22.01.2017 new sub DbLog_setinternalcols, new attributes colEvent, colReading, colValue -# 2.10.3 21.01.2017 query of cacheEvents changed, attr timeout adjustable -# 2.10.2 19.01.2017 ReduceLog now uses DbLog_ConnectNewDBH -> makes start of ReduceLog stable -# 2.10.1 19.01.2017 commandref edited, cache events don't get lost even if other errors than "db not available" occure -# 2.10 18.10.2017 new attribute cacheLimit, showNotifyTime -# 2.9.3 17.01.2017 new sub DbLog_ConnectNewDBH (own new dbh for separate use in functions except logging functions), -# DbLog_sampleDataFn, DbLog_dbReadings now use DbLog_ConnectNewDBH -# 2.9.2 16.01.2017 new bugfix for SQLite issue SVGs, DbLog_Log changed to $dev_hash->{CHANGETIME}, DbLog_Push -# changed (db handle new separated) -# 2.9.1 14.01.2017 changed DbLog_ParseEvent to CallInstanceFn, renamed flushCache to purgeCache, -# renamed syncCache to commitCache, attr cacheEvents changed to 0,1,2 -# 2.9 11.01.2017 changed DbLog_ParseEvent to CallFn -# 2.8.9 11.01.2017 own $dbhp (new DbLog_ConnectPush) for synchronous logging, delete $hash->{HELPER}{RUNNING_PID} -# if DEAD, add func flushCache, syncCache -# 2.8.8 10.01.2017 connection check in Get added, avoid warning "commit/rollback ineffective with AutoCommit enabled" -# 2.8.7 10.01.2017 bugfix no dropdown list in SVG if asynchronous mode activated (func DbLog_sampleDataFn) -# 2.8.6 09.01.2017 Workaround for Warning begin_work failed: Turning off AutoCommit failed, start new timer of -# DbLog_execmemcache after reducelog -# 2.8.5 08.01.2017 attr syncEvents, cacheEvents added to minimize events -# 2.8.4 08.01.2017 $readingFnAttributes added -# 2.8.3 08.01.2017 set NOTIFYDEV changed to use notifyRegexpChanged (Forum msg555619), attr noNotifyDev added -# 2.8.2 06.01.2017 commandref maintained to cover new functions -# 2.8.1 05.01.2017 use Time::HiRes qw(gettimeofday tv_interval), bugfix $hash->{HELPER}{RUNNING_PID} -# 2.8 03.01.2017 attr asyncMode, you have a choice to use blocking (as V2.5) or non-blocking asynchronous -# with caching, attr showproctime -# 2.7 02.01.2017 initial release non-blocking using BlockingCall -# 2.6 02.01.2017 asynchron writing to DB using cache, attr syncInterval, set listCache -# 2.5 29.12.2016 commandref maintained to cover new 
attributes, attr "excludeDevs" and "verbose4Devs" now -# accepting Regex -# 2.4.4 28.12.2016 Attribut "excludeDevs" to exclude devices from db-logging (only if $hash->{NOTIFYDEV} eq ".*") -# 2.4.3 28.12.2016 function DbLog_Log: changed separators of @row_array -> better splitting -# 2.4.2 28.12.2016 Attribut "verbose4Devs" to restrict verbose4 loggings of specific devices -# 2.4.1 27.12.2016 DbLog_Push: improved update/insert into current, analyze execute_array -> ArrayTupleStatus -# 2.4 24.12.2016 some improvements of verbose 4 logging -# 2.3.1 23.12.2016 fix due to https://forum.fhem.de/index.php/topic,62998.msg545541.html#msg545541 -# 2.3 22.12.2016 fix eval{} in DbLog_Log -# 2.2 21.12.2016 set DbLogType only to "History" if attr DbLogType not set -# 2.1 21.12.2016 use execute_array in DbLog_Push -# 2.0 19.12.2016 some improvements DbLog_Log -# 1.9.3 17.12.2016 $hash->{NOTIFYDEV} added to process only events from devices are in Regex -# 1.9.2 17.12.2016 some improvemnts DbLog_Log, DbLog_Push -# 1.9.1 16.12.2016 DbLog_Log no using encode_base64 -# 1.9 16.12.2016 DbLog_Push changed to use deviceEvents -# 1.8.1 16.12.2016 DbLog_Push changed -# 1.8 15.12.2016 bugfix of don't logging all received events -# 1.7.1 15.12.2016 attr procedure of "disabled" changed - -package main; -use strict; -use warnings; -eval "use DBI;1" or my $DbLogMMDBI = "DBI"; -use Data::Dumper; -use Blocking; -use Time::HiRes qw(gettimeofday tv_interval); -use Time::Local; -use Encode qw(encode_utf8); -no if $] >= 5.017011, warnings => 'experimental::smartmatch'; - -my $DbLogVersion = "3.12.5"; - -my %columns = ("DEVICE" => 64, - "TYPE" => 64, - "EVENT" => 512, - "READING" => 64, - "VALUE" => 128, - "UNIT" => 32 - ); - -sub DbLog_dbReadings($@); - -################################################################ -sub DbLog_Initialize($) -{ - my ($hash) = @_; - - $hash->{DefFn} = "DbLog_Define"; - $hash->{UndefFn} = "DbLog_Undef"; - $hash->{NotifyFn} = "DbLog_Log"; - $hash->{SetFn} = "DbLog_Set"; - $hash->{GetFn} = "DbLog_Get"; - $hash->{AttrFn} = "DbLog_Attr"; - $hash->{SVG_regexpFn} = "DbLog_regexpFn"; - $hash->{ShutdownFn} = "DbLog_Shutdown"; - $hash->{AttrList} = "addStateEvent:0,1 ". - "commitMode:basic_ta:on,basic_ta:off,ac:on_ta:on,ac:on_ta:off,ac:off_ta:on ". - "colEvent ". - "colReading ". - "colValue ". - "disable:1,0 ". - "DbLogType:Current,History,Current/History,SampleFill/History ". - "shutdownWait ". - "suppressUndef:0,1 ". - "verbose4Devs ". - "excludeDevs ". - "expimpdir ". - "exportCacheAppend:1,0 ". - "syncInterval ". - "noNotifyDev:1,0 ". - "showproctime:1,0 ". - "suppressAddLogV3:1,0 ". - "asyncMode:1,0 ". - "cacheEvents:2,1,0 ". - "cacheLimit ". - "noSupportPK:1,0 ". - "syncEvents:1,0 ". - "showNotifyTime:1,0 ". - "timeout ". - "useCharfilter:0,1 ". - "valueFn:textField-long ". - "DbLogSelectionMode:Exclude,Include,Exclude/Include ". - $readingFnAttributes; - - # Das Attribut DbLogSelectionMode legt fest, wie die Device-Spezifischen Atrribute - # DbLogExclude und DbLogInclude behandelt werden sollen. - # - Exclude: Es wird nur das Device-spezifische Attribut Exclude beruecksichtigt, - # d.h. generell wird alles geloggt, was nicht per DBLogExclude ausgeschlossen wird - # - Include: Es wird nur das Device-spezifische Attribut Include beruecksichtigt, - # d.h. generell wird nichts geloggt, ausßer dem was per DBLogInclude eingeschlossen wird - # - Exclude/Include: Es wird zunaechst Exclude geprueft und bei Ausschluß wird ggf. noch zusaetzlich Include geprueft, - # d.h. 
generell wird alles geloggt, es sei denn es wird per DBLogExclude ausgeschlossen.
-  #                    Wird es von DBLogExclude ausgeschlossen, kann es trotzdem wieder per DBLogInclude
-  #                    eingeschlossen werden.
-
-
-  addToAttrList("DbLogInclude");
-  addToAttrList("DbLogExclude");
-
-  $hash->{FW_detailFn}      = "DbLog_fhemwebFn";
-  $hash->{SVG_sampleDataFn} = "DbLog_sampleDataFn";
-
-}
-
-###############################################################
-sub DbLog_Define($@)
-{
-  my ($hash, $def) = @_;
-  my @a = split("[ \t][ \t]*", $def);
-
-  return "Error: Perl module ".$DbLogMMDBI." is missing.
-          Install it on Debian with: sudo apt-get install libdbi-perl" if($DbLogMMDBI);
-
-  return "wrong syntax: define <name> DbLog configuration regexp"
-    if(int(@a) != 4);
-
-  $hash->{CONFIGURATION} = $a[2];
-  my $regexp             = $a[3];
-
-  eval { "Hallo" =~ m/^$regexp$/ };
-  return "Bad regexp: $@" if($@);
-
-  $hash->{REGEXP}           = $regexp;
-  $hash->{VERSION}          = $DbLogVersion;
-  $hash->{MODE}             = AttrVal($hash->{NAME}, "asyncMode", undef)?"asynchronous":"synchronous";   # set mode, Forum:#76213
-  $hash->{HELPER}{OLDSTATE} = "initialized";
-
-  # forward only events of these devices to the NotifyFn; NOTIFYDEV is set whenever possible
-  notifyRegexpChanged($hash, $regexp);
-
-  # remember PID for plotfork
-  $hash->{PID} = $$;
-
-  # cache index for events to be written asynchronously to the DB
-  $hash->{cache}{index} = 0;
-
-  # read configuration data
-  my $ret = DbLog_readCfg($hash);
-  if ($ret) {
-      # return on error while reading configuration
-      Log3($hash->{NAME}, 1, "DbLog $hash->{NAME} - Error while reading $hash->{CONFIGURATION}: '$ret' ");
-      return $ret;
-  }
-
-  # set used COLUMNS
-  InternalTimer(gettimeofday()+2, "DbLog_setinternalcols", $hash, 0);
-
-  readingsSingleUpdate($hash, 'state', 'waiting for connection', 1);
-  DbLog_ConnectPush($hash);
-
-  # initial execution of DbLog_execmemcache
-  DbLog_execmemcache($hash);
-
-return undef;
-}
-
-################################################################
-sub DbLog_Undef($$) {
-  my ($hash, $name) = @_;
-  my $dbh= $hash->{DBHP};
-  BlockingKill($hash->{HELPER}{".RUNNING_PID"}) if($hash->{HELPER}{".RUNNING_PID"});
-  BlockingKill($hash->{HELPER}{REDUCELOG_PID}) if($hash->{HELPER}{REDUCELOG_PID});
-  BlockingKill($hash->{HELPER}{COUNT_PID}) if($hash->{HELPER}{COUNT_PID});
-  BlockingKill($hash->{HELPER}{DELDAYS_PID}) if($hash->{HELPER}{DELDAYS_PID});
-  $dbh->disconnect() if(defined($dbh));
-  RemoveInternalTimer($hash);
-
-return undef;
-}
-
-################################################################
-sub DbLog_Shutdown($) {
-  my ($hash) = @_;
-  my $name = $hash->{NAME};
-
-  $hash->{HELPER}{SHUTDOWNSEQ} = 1;
-  DbLog_execmemcache($hash);
-  my $shutdownWait = AttrVal($name,"shutdownWait",undef);
-  if(defined($shutdownWait)) {
-      Log3($name, 2, "DbLog $name - waiting for shutdown $shutdownWait seconds ...");
-      sleep($shutdownWait);
-      Log3($name, 2, "DbLog $name - continuing shutdown sequence");
-  }
-  if($hash->{HELPER}{".RUNNING_PID"}) {
-      BlockingKill($hash->{HELPER}{".RUNNING_PID"});
-  }
-
-return undef;
-}
-
-################################################################
-#
-# Called on every change of an attribute of this
-# DbLog instance
-#
-################################################################
-sub DbLog_Attr(@) {
-  my($cmd,$name,$aName,$aVal) = @_;
-  # my @a = @_;
-  my $hash = $defs{$name};
-  my $dbh  = $hash->{DBHP};
-  my $do = 0;
-
-  if($cmd eq "set") {
-      if ($aName eq "syncInterval" || $aName eq "cacheLimit" || $aName eq "timeout") {
-          unless ($aVal =~ /^[0-9]+$/) {
return " The Value of $aName is not valid. Use only figures 0-9 !";} - } - if( $aName eq 'valueFn' ) { - my %specials= ( - "%TIMESTAMP" => $name, - "%DEVICE" => $name, - "%DEVICETYPE" => $name, - "%EVENT" => $name, - "%READING" => $name, - "%VALUE" => $name, - "%UNIT" => $name, - "%IGNORE" => $name, - "%CN" => $name, - ); - my $err = perlSyntaxCheck($aVal, %specials); - return $err if($err); - } - } - - if($aName eq "colEvent" || $aName eq "colReading" || $aName eq "colValue") { - if ($cmd eq "set" && $aVal) { - unless ($aVal =~ /^[0-9]+$/) { return " The Value of $aName is not valid. Use only figures 0-9 !";} - } - InternalTimer(gettimeofday()+0.5, "DbLog_setinternalcols", $hash, 0); - } - - if($aName eq "asyncMode") { - if ($cmd eq "set" && $aVal) { - $hash->{MODE} = "asynchronous"; - InternalTimer(gettimeofday()+2, "DbLog_execmemcache", $hash, 0); - } else { - $hash->{MODE} = "synchronous"; - delete($defs{$name}{READINGS}{NextSync}); - delete($defs{$name}{READINGS}{CacheUsage}); - InternalTimer(gettimeofday()+5, "DbLog_execmemcache", $hash, 0); - } - } - - if($aName eq "commitMode") { - if ($dbh) { - $dbh->commit() if(!$dbh->{AutoCommit}); - $dbh->disconnect(); - } - } - - if($aName eq "showproctime") { - if ($cmd ne "set" || !$aVal) { - delete($defs{$name}{READINGS}{background_processing_time}); - delete($defs{$name}{READINGS}{sql_processing_time}); - } - } - - if($aName eq "showNotifyTime") { - if ($cmd ne "set" || !$aVal) { - delete($defs{$name}{READINGS}{notify_processing_time}); - } - } - - if($aName eq "noNotifyDev") { - my $regexp = $hash->{REGEXP}; - if ($cmd eq "set" && $aVal) { - delete($hash->{NOTIFYDEV}); - } else { - notifyRegexpChanged($hash, $regexp); - } - } - - if ($aName eq "disable") { - my $async = AttrVal($name, "asyncMode", 0); - if($cmd eq "set") { - $do = ($aVal) ? 1 : 0; - } - $do = 0 if($cmd eq "del"); - my $val = ($do == 1 ? "disabled" : "active"); - - # letzter CacheSync vor disablen - DbLog_execmemcache($hash) if($do == 1); - - readingsSingleUpdate($hash, "state", $val, 1); - $hash->{HELPER}{OLDSTATE} = $val; - - if ($do == 0) { - InternalTimer(gettimeofday()+2, "DbLog_execmemcache", $hash, 0) if($async); - InternalTimer(gettimeofday()+2, "DbLog_ConnectPush", $hash, 0) if(!$async); - } - } - -return undef; -} - -################################################################ -sub DbLog_Set($@) { - my ($hash, @a) = @_; - my $name = $hash->{NAME}; - my $async = AttrVal($name, "asyncMode", undef); - my $usage = "Unknown argument, choose one of reduceLog reduceLogNbl reopen rereadcfg:noArg count:noArg countNbl:noArg - deleteOldDays deleteOldDaysNbl userCommand clearReadings:noArg - eraseReadings:noArg addLog "; - $usage .= "listCache:noArg addCacheLine purgeCache:noArg commitCache:noArg exportCache:nopurge,purgecache " if (AttrVal($name, "asyncMode", undef)); - $usage .= "configCheck:noArg "; - my (@logs,$dir); - - if (!AttrVal($name,"expimpdir",undef)) { - $dir = $attr{global}{modpath}."/log/"; - } else { - $dir = AttrVal($name,"expimpdir",undef); - } - - opendir(DIR,$dir); - my $sd = "cache_".$name."_"; - while (my $file = readdir(DIR)) { - next unless (-f "$dir/$file"); - next unless ($file =~ /^$sd/); - push @logs,$file; - } - closedir(DIR); - my $cj = join(",",reverse(sort @logs)) if (@logs); - - if (@logs) { - $usage .= "importCachefile:".$cj." 
"; - } else { - $usage .= "importCachefile "; - } - - return $usage if(int(@a) < 2); - my $dbh = $hash->{DBHP}; - my $db = (split(/;|=/, $hash->{dbconn}))[1]; - my $ret; - - if ($a[1] eq 'reduceLog') { - my ($od,$nd) = split(":",$a[2]); # $od - Tage älter als , $nd - Tage neuer als - if ($nd && $nd <= $od) {return "The second day value must be greater than the first one ! ";} - if (defined($a[3]) && $a[3] !~ /^average$|^average=.+|^EXCLUDE=.+$|^INCLUDE=.+$/i) { - return "ReduceLog syntax error in set command. Please see commandref for help."; - } - if (defined $a[2] && $a[2] =~ /(^\d+$)|(^\d+:\d+$)/) { - $ret = DbLog_reduceLog($hash,@a); - InternalTimer(gettimeofday()+5, "DbLog_execmemcache", $hash, 0); - } else { - Log3($name, 1, "DbLog $name: reduceLog error, no given."); - $ret = "reduceLog error, no given."; - } - } - elsif ($a[1] eq 'reduceLogNbl') { - my ($od,$nd) = split(":",$a[2]); # $od - Tage älter als , $nd - Tage neuer als - if ($nd && $nd <= $od) {return "The second day value must be greater than the first one ! ";} - if (defined($a[3]) && $a[3] !~ /^average$|^average=.+|^EXCLUDE=.+$|^INCLUDE=.+$/i) { - return "ReduceLogNbl syntax error in set command. Please see commandref for help."; - } - if (defined $a[2] && $a[2] =~ /(^\d+$)|(^\d+:\d+$)/) { - if ($hash->{HELPER}{REDUCELOG_PID} && $hash->{HELPER}{REDUCELOG_PID}{pid} !~ m/DEAD/) { - $ret = "reduceLogNbl already in progress. Please wait for the current process to finish."; - } else { - delete $hash->{HELPER}{REDUCELOG_PID}; - my @b = @a; - shift(@b); - readingsSingleUpdate($hash,"reduceLogState","@b started",1); - $hash->{HELPER}{REDUCELOG} = \@a; - $hash->{HELPER}{REDUCELOG_PID} = BlockingCall("DbLog_reduceLogNbl","$name","DbLog_reduceLogNbl_finished"); - return; - } - } else { - Log3($name, 1, "DbLog $name: reduceLogNbl syntax error, no [:] given."); - $ret = "reduceLogNbl error, no given."; - } - } - elsif ($a[1] eq 'clearReadings') { - my @allrds = keys%{$defs{$name}{READINGS}}; - foreach my $key(@allrds) { - next if($key =~ m/state/ || $key =~ m/CacheUsage/ || $key =~ m/NextSync/); - readingsSingleUpdate($hash,$key," ",0); - } - } - elsif ($a[1] eq 'eraseReadings') { - my @allrds = keys%{$defs{$name}{READINGS}}; - foreach my $key(@allrds) { - delete($defs{$name}{READINGS}{$key}) if($key !~ m/^state$/); - } - } - elsif ($a[1] eq 'addLog') { - unless ($a[2]) { return "The argument of $a[1] is not valid. Please check commandref.";} - my $nce = ("\!useExcludes" ~~ @a)?1:0; - map(s/\!useExcludes//g, @a); - my $cn; - if(/CN=/ ~~ @a) { - my $t = join(" ",@a); - ($cn) = ($t =~ /^.*CN=(\w+).*$/); - map(s/CN=$cn//g, @a); - } - DbLog_AddLog($hash,$a[2],$a[3],$nce,$cn); - my $skip_trigger = 1; # kein Event erzeugen falls addLog device/reading not found aber Abarbeitung erfolgreich - return undef,$skip_trigger; - } - elsif ($a[1] eq 'reopen') { - if ($dbh) { - $dbh->commit() if(!$dbh->{AutoCommit}); - $dbh->disconnect(); - } - if (!$a[2]) { - Log3($name, 3, "DbLog $name: Reopen requested."); - DbLog_ConnectPush($hash); - if($hash->{HELPER}{REOPEN_RUNS}) { - delete $hash->{HELPER}{REOPEN_RUNS}; - delete $hash->{HELPER}{REOPEN_RUNS_UNTIL}; - RemoveInternalTimer($hash, "reopen"); - } - DbLog_execmemcache($hash) if($async); - $ret = "Reopen executed."; - } else { - unless ($a[2] =~ /^[0-9]+$/) { return " The Value of $a[1]-time is not valid. 
-  elsif ($a[1] eq 'reopen') {
-      if ($dbh) {
-          $dbh->commit() if(!$dbh->{AutoCommit});
-          $dbh->disconnect();
-      }
-      if (!$a[2]) {
-          Log3($name, 3, "DbLog $name: Reopen requested.");
-          DbLog_ConnectPush($hash);
-          if($hash->{HELPER}{REOPEN_RUNS}) {
-              delete $hash->{HELPER}{REOPEN_RUNS};
-              delete $hash->{HELPER}{REOPEN_RUNS_UNTIL};
-              RemoveInternalTimer($hash, "reopen");
-          }
-          DbLog_execmemcache($hash) if($async);
-          $ret = "Reopen executed.";
-      } else {
-          unless ($a[2] =~ /^[0-9]+$/) { return " The Value of $a[1]-time is not valid. Use only figures 0-9 !";}
-          # status bit "don't allow writing into the DB" when reopen was set with a timeout
-          $hash->{HELPER}{REOPEN_RUNS} = $a[2];
-
-          # if a stalled process exists -> kill it
-          BlockingKill($hash->{HELPER}{".RUNNING_PID"}) if($hash->{HELPER}{".RUNNING_PID"});
-          BlockingKill($hash->{HELPER}{REDUCELOG_PID}) if($hash->{HELPER}{REDUCELOG_PID});
-          BlockingKill($hash->{HELPER}{COUNT_PID}) if($hash->{HELPER}{COUNT_PID});
-          BlockingKill($hash->{HELPER}{DELDAYS_PID}) if($hash->{HELPER}{DELDAYS_PID});
-          delete $hash->{HELPER}{".RUNNING_PID"};
-          delete $hash->{HELPER}{COUNT_PID};
-          delete $hash->{HELPER}{DELDAYS_PID};
-          delete $hash->{HELPER}{REDUCELOG_PID};
-
-          my $ts = (split(" ",FmtDateTime(gettimeofday()+$a[2])))[1];
-          Log3($name, 2, "DbLog $name: Connection closed until $ts ($a[2] seconds).");
-          readingsSingleUpdate($hash, "state", "closed until $ts ($a[2] seconds)", 1);
-          InternalTimer(gettimeofday()+$a[2], "DbLog_reopen", $hash, 0);
-          $hash->{HELPER}{REOPEN_RUNS_UNTIL} = $ts;
-      }
-  }
-  elsif ($a[1] eq 'rereadcfg') {
-      Log3($name, 3, "DbLog $name: Rereadcfg requested.");
-
-      if ($dbh) {
-          $dbh->commit() if(!$dbh->{AutoCommit});
-          $dbh->disconnect();
-      }
-      $ret = DbLog_readCfg($hash);
-      return $ret if $ret;
-      DbLog_ConnectPush($hash);
-      $ret = "Rereadcfg executed.";
-  }
-  elsif ($a[1] eq 'purgeCache') {
-      delete $hash->{cache};
-      readingsSingleUpdate($hash, 'CacheUsage', 0, 1);
-  }
-  elsif ($a[1] eq 'commitCache') {
-      DbLog_execmemcache($hash);
-  }
-  elsif ($a[1] eq 'listCache') {
-      my $cache;
-      foreach my $key (sort{$a <=>$b}keys%{$hash->{cache}{".memcache"}}) {
-          $cache .= $key." => ".$hash->{cache}{".memcache"}{$key}."\n";
-      }
-      return $cache;
-  }
-  elsif ($a[1] eq 'addCacheLine') {
-      if(!$a[2]) {
-          return "Syntax error in set $a[1] command. Use this line format: YYYY-MM-DD HH:MM:SS|<device>|<type>|<event>|<reading>|<value>|[<unit>] ";
-      }
-      my @b = @a;
-      shift @b;
-      shift @b;
-      my $aa;
-      foreach my $k (@b) {
-          $aa .= "$k ";
-      }
-      chop($aa);  # remove trailing blank
-      $aa = DbLog_charfilter($aa) if(AttrVal($name, "useCharfilter",0));
-
-      my ($i_timestamp, $i_dev, $i_type, $i_evt, $i_reading, $i_val, $i_unit) = split("\\|",$aa);
-      if($i_timestamp !~ /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/ || !$i_dev || !$i_reading) {
-          return "Syntax error in set $a[1] command. Use this line format: YYYY-MM-DD HH:MM:SS|<device>|<type>|<event>|<reading>|<value>|[<unit>] ";
-      }
-      my ($yyyy, $mm, $dd, $hh, $min, $sec) = ($i_timestamp =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/);
-      eval { my $ts = timelocal($sec, $min, $hh, $dd, $mm-1, $yyyy-1900); };
-
-      if ($@) {
-          my @l = split (/at/, $@);
-          return " Timestamp is out of range - $l[0]";
-      }
-      DbLog_addCacheLine($hash,$i_timestamp,$i_dev,$i_type,$i_evt,$i_reading,$i_val,$i_unit);
-
-  }
-  elsif ($a[1] eq 'configCheck') {
-      my $check = DbLog_configcheck($hash);
-      return $check;
-  }
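
Editor's aside: addCacheLine above and exportCache/importCachefile below all operate on the same pipe-separated line format. A self-contained sketch of the plausibility checks, with a made-up sample line:

use strict;
use warnings;
use Time::Local;

my $line = "2018-10-17 14:36:27|MyDevice|DUMMY|temperature: 21.5|temperature|21.5|°C";
my ($i_timestamp, $i_dev, $i_type, $i_evt, $i_reading, $i_val, $i_unit) = split("\\|",$line);

# same plausibility checks as in the module: timestamp layout, mandatory device/reading,
# and a timelocal() probe that rejects out-of-range dates
if($i_timestamp =~ /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/ && $i_dev && $i_reading) {
    eval { timelocal($6, $5, $4, $3, $2-1, $1-1900); };
    print $@ ? "timestamp out of range\n" : "line accepted\n";
} else {
    print "syntax error\n";
}
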
-  elsif ($a[1] eq 'exportCache') {
-      my $cln;
-      my $crows = 0;
-      my ($out,$outfile,$error);
-      my $now = strftime('%Y-%m-%d_%H-%M-%S',localtime);
-
-      # return if "reopen" with a timeout is still running or the device is disabled; asynch mode only
-      return if(IsDisabled($name) || $hash->{HELPER}{REOPEN_RUNS});
-      return if(!AttrVal($name, "asyncMode", undef));
-
-      if(@logs && AttrVal($name, "exportCacheAppend", 0)) {
-          # an exported cachefile already exists and the export shall be appended to the most recent one
-          $outfile = $dir.pop(@logs);
-          $out     = ">>$outfile";
-      } else {
-          $outfile = $dir."cache_".$name."_".$now;
-          $out     = ">$outfile";
-      }
-      if(open(FH, $out)) {
-          binmode (FH);
-      } else {
-          readingsSingleUpdate($hash, "lastCachefile", $outfile." - Error - ".$!, 1);
-          $error = "could not open ".$outfile.": ".$!;
-      }
-
-      if(!$error) {
-          foreach my $key (sort(keys%{$hash->{cache}{".memcache"}})) {
-              $cln = $hash->{cache}{".memcache"}{$key}."\n";
-              print FH $cln ;
-              $crows++;
-          }
-          close(FH);
-          readingsSingleUpdate($hash, "lastCachefile", $outfile." (".$crows." cache rows exported)", 1);
-      }
-
-      # readingsSingleUpdate($hash, "state", $crows." cache rows exported to ".$outfile, 1);
-
-      my $state = $error?$error:(IsDisabled($name))?"disabled":"connected";
-      my $evt   = ($state eq $hash->{HELPER}{OLDSTATE})?0:1;
-      readingsSingleUpdate($hash, "state", $state, $evt);
-      $hash->{HELPER}{OLDSTATE} = $state;
-
-      Log3($name, 3, "DbLog $name: $crows cache rows exported to $outfile.");
-
-      if (lc($a[-1]) =~ m/^purgecache/i) {
-          delete $hash->{cache};
-          readingsSingleUpdate($hash, 'CacheUsage', 0, 1);
-          Log3($name, 3, "DbLog $name: Cache purged after exporting rows to $outfile.");
-      }
-      return;
-  }
-  elsif ($a[1] eq 'importCachefile') {
-      my $cln;
-      my $crows = 0;
-      my $infile;
-      my @row_array;
-      readingsSingleUpdate($hash, "lastCachefile", "", 0);
-
-      # return if "reopen" with a timeout is still running or the device is disabled
-      return if(IsDisabled($name) || $hash->{HELPER}{REOPEN_RUNS});
-
-      if (!$a[2]) {
-          return "Wrong function-call. Use set <name> importCachefile <file> without directory (see attr expimpdir)." ;
-      } else {
-          $infile = $dir.$a[2];
-      }
-
-      if (open(FH, "$infile")) {
-          binmode (FH);
-      } else {
-          return "could not open ".$infile.": ".$!;
-      }
-      while (<FH>) {
-          my $row = $_;
-          $row = DbLog_charfilter($row) if(AttrVal($name, "useCharfilter",0));
-          push(@row_array, $row);
-          $crows++;
-      }
-      close(FH);
-
-      if(@row_array) {
-          my $error = DbLog_Push($hash, 1, @row_array);
-          if($error) {
-              readingsSingleUpdate($hash, "lastCachefile", $infile." - Error - ".$!, 1);
-              readingsSingleUpdate($hash, "state", $error, 1);
-              Log3 $name, 5, "DbLog $name -> DbLog_Push Returncode: $error";
-          } else {
-              unless(rename($dir.$a[2], $dir."impdone_".$a[2])) {
-                  Log3($name, 2, "DbLog $name: cachefile $infile couldn't be renamed after import !");
-              }
-              readingsSingleUpdate($hash, "lastCachefile", $infile." import successful", 1);
-              readingsSingleUpdate($hash, "state", $crows." cache rows processed from ".$infile, 1);
-              Log3($name, 3, "DbLog $name: $crows cache rows processed from $infile.");
-          }
-      } else {
-          readingsSingleUpdate($hash, "state", "no rows in ".$infile, 1);
-          Log3($name, 3, "DbLog $name: $infile doesn't contain any rows - no imports done.");
-      }
-
-      return;
-  }
-  elsif ($a[1] eq 'count') {
-      $dbh = DbLog_ConnectNewDBH($hash);
-      if(!$dbh) {
-          Log3($name, 1, "DbLog $name: DBLog_Set - count - DB connect not possible");
-          return;
-      } else {
-          Log3($name, 4, "DbLog $name: Records count requested.");
-          my $c = $dbh->selectrow_array('SELECT count(*) FROM history');
-          readingsSingleUpdate($hash, 'countHistory', $c ,1);
-          $c = $dbh->selectrow_array('SELECT count(*) FROM current');
-          readingsSingleUpdate($hash, 'countCurrent', $c ,1);
-          $dbh->disconnect();
-
-          InternalTimer(gettimeofday()+5, "DbLog_execmemcache", $hash, 0);
-      }
-  }
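
Editor's aside: the cachefile read loop of importCachefile above, reduced to a standalone sketch (the cachefile name is made up; the module additionally applies DbLog_charfilter per row if configured):

use strict;
use warnings;

my @row_array;
my $infile = "cache_myDbLog_2018-10-17_14-36-27";     # hypothetical cachefile name
open(my $fh, "<", $infile) or die "could not open ".$infile.": ".$!;
while (my $row = <$fh>) {
    chomp $row;
    push(@row_array, $row);                           # one pipe-separated event per line
}
close($fh);
print scalar(@row_array)." rows read from $infile\n";
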
-  elsif ($a[1] eq 'countNbl') {
-      if ($hash->{HELPER}{COUNT_PID} && $hash->{HELPER}{COUNT_PID}{pid} !~ m/DEAD/){
-          $ret = "DbLog count already in progress. Please wait for the current process to finish.";
-      } else {
-          delete $hash->{HELPER}{COUNT_PID};
-          $hash->{HELPER}{COUNT_PID} = BlockingCall("DbLog_countNbl","$name","DbLog_countNbl_finished");
-          return;
-      }
-  }
-  elsif ($a[1] eq 'deleteOldDays') {
-      Log3 ($name, 3, "DbLog $name -> Deletion of records older than $a[2] days in database $db requested");
-      my ($c, $cmd);
-
-      $dbh = DbLog_ConnectNewDBH($hash);
-      if(!$dbh) {
-          Log3($name, 1, "DbLog $name: DBLog_Set - deleteOldDays - DB connect not possible");
-          return;
-      } else {
-          $cmd = "delete from history where TIMESTAMP < ";
-
-          if ($hash->{MODEL} eq 'SQLITE')        { $cmd .= "datetime('now', '-$a[2] days')"; }
-          elsif ($hash->{MODEL} eq 'MYSQL')      { $cmd .= "DATE_SUB(CURDATE(),INTERVAL $a[2] DAY)"; }
-          elsif ($hash->{MODEL} eq 'POSTGRESQL') { $cmd .= "NOW() - INTERVAL '$a[2]' DAY"; }
-          else { $cmd = undef; $ret = 'Unknown database type. Maybe you can try userCommand anyway.'; }
-
-          if(defined($cmd)) {
-              $c = $dbh->do($cmd);
-              $c = 0 if($c == 0E0);
-              eval {$dbh->commit() if(!$dbh->{AutoCommit});};
-              $dbh->disconnect();
-              Log3 ($name, 3, "DbLog $name -> deleteOldDays finished. $c entries of database $db deleted.");
-              readingsSingleUpdate($hash, 'lastRowsDeleted', $c ,1);
-          }
-
-          InternalTimer(gettimeofday()+5, "DbLog_execmemcache", $hash, 0);
-      }
-  }
-  elsif ($a[1] eq 'deleteOldDaysNbl') {
-      if (defined $a[2] && $a[2] =~ /^\d+$/) {
-          if ($hash->{HELPER}{DELDAYS_PID} && $hash->{HELPER}{DELDAYS_PID}{pid} !~ m/DEAD/) {
-              $ret = "deleteOldDaysNbl already in progress. Please wait for the current process to finish.";
-          } else {
-              delete $hash->{HELPER}{DELDAYS_PID};
-              $hash->{HELPER}{DELDAYS} = $a[2];
-              Log3 ($name, 3, "DbLog $name -> Deletion of records older than $a[2] days in database $db requested");
-              $hash->{HELPER}{DELDAYS_PID} = BlockingCall("DbLog_deldaysNbl","$name","DbLog_deldaysNbl_done");
-              return;
-          }
-      } else {
-          Log3($name, 1, "DbLog $name: deleteOldDaysNbl error, no <days> given.");
-          $ret = "deleteOldDaysNbl error, no <days> given.";
-      }
-  }
-  elsif ($a[1] eq 'userCommand') {
-      $dbh = DbLog_ConnectNewDBH($hash);
-      if(!$dbh) {
-          Log3($name, 1, "DbLog $name: DBLog_Set - userCommand - DB connect not possible");
-          return;
-      } else {
-          Log3($name, 4, "DbLog $name: userCommand execution requested.");
-          my ($c, @cmd, $sql);
-          @cmd = @a;
-          shift(@cmd); shift(@cmd);
-          $sql = join(" ",@cmd);
-          readingsSingleUpdate($hash, 'userCommand', $sql, 1);
-          $dbh->{RaiseError} = 1;
-          $dbh->{PrintError} = 0;
-          my $error;
-          eval { $c = $dbh->selectrow_array($sql); };
-          if($@) {
-              $error = $@;
-              Log3($name, 1, "DbLog $name: DBLog_Set - $error");
-          }
-
-          my $res = $error?$error:(defined($c))?$c:"no result";
-          Log3($name, 4, "DbLog $name: DBLog_Set - userCommand - result: $res");
-          readingsSingleUpdate($hash, 'userCommandResult', $res ,1);
-          $dbh->disconnect();
-
-          InternalTimer(gettimeofday()+5, "DbLog_execmemcache", $hash, 0);
-      }
-  }
-  else { $ret = $usage; }
-
-return $ret;
-}
-
-###############################################################################################
-#
-# Extract the filter from the ColumnsSpec (gplot file)
-#
-# The basic idea is that every SVG plot has a filter specifying which device
-# and reading are displayed, so that the plot reloads itself whenever a
-# matching event occurs.
-#
-# Parameter: name of the source instance, plus all FileLog parameters concerning this instance.
-# Quelle: http://forum.fhem.de/index.php/topic,40176.msg325200.html#msg325200 -############################################################################################### -sub DbLog_regexpFn($$) { - my ($name, $filter) = @_; - my $ret; - - my @a = split( ' ', $filter ); - for(my $i = 0; $i < int(@a); $i++) { - my @fld = split(":", $a[$i]); - - $ret .= '|' if( $ret ); - no warnings 'uninitialized'; # Forum:74690, bug unitialized - $ret .= $fld[0] .'.'. $fld[1]; - use warnings; - } - -return $ret; -} - -################################################################ -# -# Parsefunktion, abhaengig vom Devicetyp -# -################################################################ -sub DbLog_ParseEvent($$$) -{ - my ($device, $type, $event)= @_; - my @result; - my $reading; - my $value; - my $unit; - - # Splitfunktion der Eventquelle aufrufen (ab 2.9.1) - ($reading, $value, $unit) = CallInstanceFn($device, "DbLog_splitFn", $event, $device); - # undef bedeutet, Modul stellt keine DbLog_splitFn bereit - if($reading) { - return ($reading, $value, $unit); - } - - # split the event into reading, value and unit - # "day-temp: 22.0 (Celsius)" -> "day-temp", "22.0 (Celsius)" - my @parts = split(/: /,$event); - $reading = shift @parts; - if(@parts == 2) { - $value = $parts[0]; - $unit = $parts[1]; - } else { - $value = join(": ", @parts); - $unit = ""; - } - - #default - if(!defined($reading)) { $reading = ""; } - if(!defined($value)) { $value = ""; } - if( $value eq "" ) { - $reading= "state"; - $value= $event; - } - - #globales Abfangen von - # - temperature - # - humidity - if ($reading =~ m(^temperature)) { $unit= "°C"; } # wenn reading mit temperature beginnt - elsif($reading =~ m(^humidity)) { $unit= "%"; } - - # the interpretation of the argument depends on the device type - # EMEM, M232Counter, M232Voltage return plain numbers - if(($type eq "M232Voltage") || - ($type eq "M232Counter") || - ($type eq "EMEM")) { - } - #OneWire - elsif(($type eq "OWMULTI")) { - if(int(@parts)>1) { - $reading = "data"; - $value = $event; - } else { - @parts = split(/\|/, AttrVal($device, $reading."VUnit", "")); - $unit = $parts[1] if($parts[1]); - if(lc($reading) =~ m/temp/) { - $value=~ s/ \(Celsius\)//; - $value=~ s/([-\.\d]+).*/$1/; - $unit= "°C"; - } - elsif(lc($reading) =~ m/(humidity|vwc)/) { - $value=~ s/ \(\%\)//; - $unit= "%"; - } - } - } - # Onewire - elsif(($type eq "OWAD") || - ($type eq "OWSWITCH")) { - if(int(@parts)>1) { - $reading = "data"; - $value = $event; - } else { - @parts = split(/\|/, AttrVal($device, $reading."Unit", "")); - $unit = $parts[1] if($parts[1]); - } - } - - # ZWAVE - elsif ($type eq "ZWAVE") { - if ( $value=~/([-\.\d]+)\s([a-z].*)/i ) { - $value = $1; - $unit = $2; - } - } - - # FBDECT - elsif ($type eq "FBDECT") { - if ( $value=~/([\.\d]+)\s([a-z].*)/i ) { - $value = $1; - $unit = $2; - } - } - - # MAX - elsif(($type eq "MAX")) { - $unit= "°C" if(lc($reading) =~ m/temp/); - $unit= "%" if(lc($reading) eq "valveposition"); - } - - # FS20 - elsif(($type eq "FS20") || ($type eq "X10")) { - if($reading =~ m/^dim(\d+).*/o) { - $value = $1; - $reading= "dim"; - $unit= "%"; - } - elsif(!defined($value) || $value eq "") { - $value= $reading; - $reading= "data"; - } - } - - # FHT - elsif($type eq "FHT") { - if($reading =~ m(-from[12]\ ) || $reading =~ m(-to[12]\ )) { - @parts= split(/ /,$event); - $reading= $parts[0]; - $value= $parts[1]; - $unit= ""; - } - elsif($reading =~ m(-temp)) { $value=~ s/ \(Celsius\)//; $unit= "°C"; } - elsif($reading =~ m(temp-offset)) { $value=~ s/ 
\(Celsius\)//; $unit= "°C"; } - elsif($reading =~ m(^actuator[0-9]*)) { - if($value eq "lime-protection") { - $reading= "actuator-lime-protection"; - undef $value; - } - elsif($value =~ m(^offset:)) { - $reading= "actuator-offset"; - @parts= split(/: /,$value); - $value= $parts[1]; - if(defined $value) { - $value=~ s/%//; $value= $value*1.; $unit= "%"; - } - } - elsif($value =~ m(^unknown_)) { - @parts= split(/: /,$value); - $reading= "actuator-" . $parts[0]; - $value= $parts[1]; - if(defined $value) { - $value=~ s/%//; $value= $value*1.; $unit= "%"; - } - } - elsif($value =~ m(^synctime)) { - $reading= "actuator-synctime"; - undef $value; - } - elsif($value eq "test") { - $reading= "actuator-test"; - undef $value; - } - elsif($value eq "pair") { - $reading= "actuator-pair"; - undef $value; - } - else { - $value=~ s/%//; $value= $value*1.; $unit= "%"; - } - } - } - # KS300 - elsif($type eq "KS300") { - if($event =~ m(T:.*)) { $reading= "data"; $value= $event; } - elsif($event =~ m(avg_day)) { $reading= "data"; $value= $event; } - elsif($event =~ m(avg_month)) { $reading= "data"; $value= $event; } - elsif($reading eq "temperature") { $value=~ s/ \(Celsius\)//; $unit= "°C"; } - elsif($reading eq "wind") { $value=~ s/ \(km\/h\)//; $unit= "km/h"; } - elsif($reading eq "rain") { $value=~ s/ \(l\/m2\)//; $unit= "l/m2"; } - elsif($reading eq "rain_raw") { $value=~ s/ \(counter\)//; $unit= ""; } - elsif($reading eq "humidity") { $value=~ s/ \(\%\)//; $unit= "%"; } - elsif($reading eq "israining") { - $value=~ s/ \(yes\/no\)//; - $value=~ s/no/0/; - $value=~ s/yes/1/; - } - } - # HMS - elsif($type eq "HMS" || - $type eq "CUL_WS" || - $type eq "OWTHERM") { - if($event =~ m(T:.*)) { $reading= "data"; $value= $event; } - elsif($reading eq "temperature") { - $value=~ s/ \(Celsius\)//; - $value=~ s/([-\.\d]+).*/$1/; #OWTHERM - $unit= "°C"; - } - elsif($reading eq "humidity") { $value=~ s/ \(\%\)//; $unit= "%"; } - elsif($reading eq "battery") { - $value=~ s/ok/1/; - $value=~ s/replaced/1/; - $value=~ s/empty/0/; - } - } - # CUL_HM - elsif ($type eq "CUL_HM") { - # remove trailing % - $value=~ s/ \%$//; - } - - # BS - elsif($type eq "BS") { - if($event =~ m(brightness:.*)) { - @parts= split(/ /,$event); - $reading= "lux"; - $value= $parts[4]*1.; - $unit= "lux"; - } - } - - # RFXTRX Lighting - elsif($type eq "TRX_LIGHT") { - if($reading =~ m/^level (\d+)/) { - $value = $1; - $reading= "level"; - } - } - - # RFXTRX Sensors - elsif($type eq "TRX_WEATHER") { - if($reading eq "energy_current") { $value=~ s/ W//; } - elsif($reading eq "energy_total") { $value=~ s/ kWh//; } -# elsif($reading eq "temperature") {TODO} -# elsif($reading eq "temperature") {TODO - elsif($reading eq "battery") { - if ($value=~ m/(\d+)\%/) { - $value= $1; - } - else { - $value= ($value eq "ok"); - } - } - } - - # Weather - elsif($type eq "WEATHER") { - if($event =~ m(^wind_condition)) { - @parts= split(/ /,$event); # extract wind direction from event - if(defined $parts[0]) { - $reading = "wind_condition"; - $value= "$parts[1] $parts[2] $parts[3]"; - } - } - if($reading eq "wind_condition") { $unit= "km/h"; } - elsif($reading eq "wind_chill") { $unit= "°C"; } - elsif($reading eq "wind_direction") { $unit= ""; } - elsif($reading =~ m(^wind)) { $unit= "km/h"; } # wind, wind_speed - elsif($reading =~ m(^temperature)) { $unit= "°C"; } # wenn reading mit temperature beginnt - elsif($reading =~ m(^humidity)) { $unit= "%"; } - elsif($reading =~ m(^pressure)) { $unit= "hPa"; } - elsif($reading =~ m(^pressure_trend)) { $unit= ""; } - } - - # 
FHT8V - elsif($type eq "FHT8V") { - if($reading =~ m(valve)) { - @parts= split(/ /,$event); - $reading= $parts[0]; - $value= $parts[1]; - $unit= "%"; - } - } - - # Dummy - elsif($type eq "DUMMY") { - if( $value eq "" ) { - $reading= "data"; - $value= $event; - } - $unit= ""; - } - - @result= ($reading,$value,$unit); - return @result; -} - -################################################################################################################## -# -# Hauptroutine zum Loggen. Wird bei jedem Eventchange -# aufgerufen -# -################################################################################################################## -# Es werden nur die Events von Geräten verarbeitet die im Hash $hash->{NOTIFYDEV} gelistet sind (wenn definiert). -# Dadurch kann die Menge der Events verringert werden. In sub DbRep_Define angeben. -# Beispiele: -# $hash->{NOTIFYDEV} = "global"; -# $hash->{NOTIFYDEV} = "global,Definition_A,Definition_B"; - -sub DbLog_Log($$) { - # $hash is my entry, $dev_hash is the entry of the changed device - my ($hash, $dev_hash) = @_; - my $name = $hash->{NAME}; - my $dev_name = $dev_hash->{NAME}; - my $dev_type = uc($dev_hash->{TYPE}); - my $async = AttrVal($name, "asyncMode", undef); - my $clim = AttrVal($name, "cacheLimit", 500); - my $ce = AttrVal($name, "cacheEvents", 0); - my $net; - - return if(IsDisabled($name) || !$hash->{HELPER}{COLSET} || $init_done != 1); - - # Notify-Routine Startzeit - my $nst = [gettimeofday]; - - my $events = deviceEvents($dev_hash, AttrVal($name, "addStateEvent", 1)); - return if(!$events); - - my $max = int(@{$events}); - - # verbose4 Logs nur für Devices in Attr "verbose4Devs" - my $vb4show = 0; - my @vb4devs = split(",", AttrVal($name, "verbose4Devs", "")); - if (!@vb4devs) { - $vb4show = 1; - } else { - foreach (@vb4devs) { - if($dev_name =~ m/$_/i) { - $vb4show = 1; - last; - } - } - # Log3 $name, 5, "DbLog $name -> verbose 4 output of device $dev_name skipped due to attribute \"verbose4Devs\" restrictions" if(!$vb4show); - } - - if($vb4show && !$hash->{HELPER}{".RUNNING_PID"}) { - Log3 $name, 4, "DbLog $name -> ################################################################"; - Log3 $name, 4, "DbLog $name -> ### start of new Logcycle ###"; - Log3 $name, 4, "DbLog $name -> ################################################################"; - Log3 $name, 4, "DbLog $name -> number of events received: $max for device: $dev_name"; - } - - my $re = $hash->{REGEXP}; - my @row_array; - my ($event,$reading,$value,$unit); - my $ts_0 = TimeNow(); # timestamp in SQL format YYYY-MM-DD hh:mm:ss - my $now = gettimeofday(); # get timestamp in seconds since epoch - my $DbLogExclude = AttrVal($dev_name, "DbLogExclude", undef); - my $DbLogInclude = AttrVal($dev_name, "DbLogInclude",undef); - my $DbLogSelectionMode = AttrVal($name, "DbLogSelectionMode","Exclude"); - my $value_fn = AttrVal( $name, "valueFn", "" ); - - # Funktion aus Attr valueFn validieren - if( $value_fn =~ m/^\s*(\{.*\})\s*$/s ) { - $value_fn = $1; - } else { - $value_fn = ''; - } - - #one Transaction - eval { - for (my $i = 0; $i < $max; $i++) { - my $next = 0; - my $event = $events->[$i]; - $event = "" if(!defined($event)); - $event = DbLog_charfilter($event) if(AttrVal($name, "useCharfilter",0)); - Log3 $name, 4, "DbLog $name -> check Device: $dev_name , Event: $event" if($vb4show && !$hash->{HELPER}{".RUNNING_PID"}); - - if($dev_name =~ m/^$re$/ || "$dev_name:$event" =~ m/^$re$/ || $DbLogSelectionMode eq 'Include') { - my $timestamp = $ts_0; - $timestamp = 
$dev_hash->{CHANGETIME}[$i] if(defined($dev_hash->{CHANGETIME}[$i]));
-              $event =~ s/\|/_ESC_/g;    # escape pipe "|"
-
-              my @r = DbLog_ParseEvent($dev_name, $dev_type, $event);
-              $reading = $r[0];
-              $value   = $r[1];
-              $unit    = $r[2];
-              if(!defined $reading) {$reading = "";}
-              if(!defined $value) {$value = "";}
-              if(!defined $unit || $unit eq "") {$unit = AttrVal("$dev_name", "unit", "");}
-
-              $unit = DbLog_charfilter($unit) if(AttrVal($name, "useCharfilter",0));
-
-              # exclude devices / readings by attribute "excludeDevs"
-              # attr excludeDevs <devspec>[#<Reading>],<devspec>[#<Reading>],...
-              my ($exc,@excldr,$ds,$rd,@exdvs);
-              $exc = AttrVal($name, "excludeDevs", "");
-              if($exc) {
-                  $exc    =~ s/[\s\n]/,/g;
-                  @excldr = split(",",$exc);
-                  foreach my $excl (@excldr) {
-                      ($ds,$rd) = split("#",$excl);
-                      @exdvs = devspec2array($ds);
-                      if(@exdvs) {
-                          # Log3 $name, 3, "DbLog $name -> excludeDevs: @exdvs";
-                          foreach (@exdvs) {
-                              if($rd) {
-                                  if("$dev_name:$reading" =~ m/^$_:$rd$/) {
-                                      Log3 $name, 4, "DbLog $name -> Device:Reading \"$dev_name:$reading\" global excluded from logging by attribute \"excludeDevs\" " if($vb4show && !$hash->{HELPER}{".RUNNING_PID"});
-                                      $next = 1;
-                                  }
-                              } else {
-                                  if($dev_name =~ m/^$_$/) {
-                                      Log3 $name, 4, "DbLog $name -> Device \"$dev_name\" global excluded from logging by attribute \"excludeDevs\" " if($vb4show && !$hash->{HELPER}{".RUNNING_PID"});
-                                      $next = 1;
-                                  }
-                              }
-                          }
-                      }
-                  }
-                  next if($next);
-              }
-
-              Log3 $name, 5, "DbLog $name -> parsed Event: $dev_name , Event: $event" if($vb4show && !$hash->{HELPER}{".RUNNING_PID"});
-              Log3 $name, 5, "DbLog $name -> DbLogExclude of \"$dev_name\": $DbLogExclude" if($vb4show && !$hash->{HELPER}{".RUNNING_PID"} && $DbLogExclude);
-              Log3 $name, 5, "DbLog $name -> DbLogInclude of \"$dev_name\": $DbLogInclude" if($vb4show && !$hash->{HELPER}{".RUNNING_PID"} && $DbLogInclude);
-
-              # Depending on DbLogSelectionMode, the default result of the include/exclude check
-              # has to be preset accordingly.
-              # Don't log readings that are explicitly excluded via DbLogExclude.
-              my $DoIt = 0;
-              $DoIt = 1 if($DbLogSelectionMode =~ m/Exclude/ );
-
-              if($DbLogExclude && $DbLogSelectionMode =~ m/Exclude/) {
-                  # e.g.: "(temperature|humidity):300 battery:3600"
-                  my @v1 = split(/,/, $DbLogExclude);
-
-                  for (my $i=0; $i<int(@v1); $i++) {
-                      my @v2 = split(/:/, $v1[$i]);
-                      $DoIt = 0 if(!$v2[1] && $reading =~ m,^$v2[0]$,);   # reading matches the regexp, no MinInterval given
-
-                      if($v2[1] && $reading =~ m,^$v2[0]$,) {
-                          # regexp matches and a MinInterval is given
-                          my $lt = $defs{$dev_hash->{NAME}}{Helper}{DBLOG}{$reading}{$hash->{NAME}}{TIME};
-                          my $lv = $defs{$dev_hash->{NAME}}{Helper}{DBLOG}{$reading}{$hash->{NAME}}{VALUE};
-                          $lt = 0  if(!$lt);
-                          $lv = "" if(!$lv);
-
-                          if(($now-$lt < $v2[1]) && ($lv eq $value)) {
-                              # within MinInterval and LastValue=Value
-                              $DoIt = 0;
-                          }
-                      }
-                  }
-              }
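
Editor's aside: the DbLogExclude matching above, condensed into a standalone helper with a hypothetical name. Each comma-separated item of the attribute is readingRegex[:MinInterval]; within MinInterval seconds an unchanged value is not logged again:

use strict;
use warnings;

sub suppressed_by_exclude {
    my ($spec, $reading, $value, $last_time, $last_value, $now) = @_;
    for my $item (split /,/, $spec) {
        my ($rex,$min) = split /:/, $item;
        next if($reading !~ m,^$rex$,);
        return 1 if(!$min);                                                # excluded outright
        return 1 if(($now-$last_time < $min) && ($last_value eq $value));  # unchanged within MinInterval
    }
    return 0;
}

# battery repeated unchanged after 120 s with MinInterval 3600 -> suppressed (prints 1)
print suppressed_by_exclude("(temperature|humidity):300,battery:3600",
                            "battery", "ok", time()-120, "ok", time()), "\n";
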
-              # Additionally check DbLogInclude if the reading was already excluded by DbLogExclude.
-              # Effectively the very same check as for DbLogExclude, only with the opposite result.
-              if($DoIt == 0) {
-                  if($DbLogInclude && ($DbLogSelectionMode =~ m/Include/)) {
-                      my @v1 = split(/,/, $DbLogInclude);
-
-                      for (my $i=0; $i<int(@v1); $i++) {
-                          my @v2 = split(/:/, $v1[$i]);
-                          $DoIt = 1 if($reading =~ m,^$v2[0]$,);   # reading matches the regexp
-
-                          if($v2[1] && $reading =~ m,^$v2[0]$,) {
-                              # regexp matches and a MinInterval is given
-                              my $lt = $defs{$dev_hash->{NAME}}{Helper}{DBLOG}{$reading}{$hash->{NAME}}{TIME};
-                              my $lv = $defs{$dev_hash->{NAME}}{Helper}{DBLOG}{$reading}{$hash->{NAME}}{VALUE};
-                              $lt = 0  if(!$lt);
-                              $lv = "" if(!$lv);
-
-                              if(($now-$lt < $v2[1]) && ($lv eq $value)) {
-                                  # within MinInterval and LastValue=Value
-                                  $DoIt = 0;
-                              }
-                          }
-                      }
-                  }
-              }
-              next if($DoIt == 0);
-
-              if ($DoIt) {
-                  $defs{$dev_name}{Helper}{DBLOG}{$reading}{$hash->{NAME}}{TIME}  = $now;
-                  $defs{$dev_name}{Helper}{DBLOG}{$reading}{$hash->{NAME}}{VALUE} = $value;
-
-                  # the user may modify field values via the function in attr valueFn or skip logging of the dataset
-                  if($value_fn ne '') {
-                      my $TIMESTAMP  = $timestamp;
-                      my $DEVICE     = $dev_name;
-                      my $DEVICETYPE = $dev_type;
-                      my $EVENT      = $event;
-                      my $READING    = $reading;
-                      my $VALUE      = $value;
-                      my $UNIT       = $unit;
-                      my $IGNORE     = 0;
-                      my $CN         = " ";
-
-                      eval $value_fn;
-                      Log3 $name, 2, "DbLog $name -> error valueFn: ".$@ if($@);
-                      if($IGNORE) {
-                          # the current event is not logged if $IGNORE=1 was set inside $value_fn
-                          Log3 $hash->{NAME}, 4, "DbLog $name -> Event ignored by valueFn - TS: $timestamp, Device: $dev_name, Type: $dev_type, Event: $event, Reading: $reading, Value: $value, Unit: $unit"
-                                                  if($vb4show && !$hash->{HELPER}{".RUNNING_PID"});
-                          next;
-                      }
-
-                      $timestamp = $TIMESTAMP if($TIMESTAMP =~ /(19[0-9][0-9]|2[0-9][0-9][0-9])-(0[1-9]|1[1-2])-(0[1-9]|1[0-9]|2[0-9]|3[0-1]) (0[0-9]|1[1-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])/);
-                      $dev_name  = $DEVICE     if($DEVICE ne '');
-                      $dev_type  = $DEVICETYPE if($DEVICETYPE ne '');
-                      $reading   = $READING    if($READING ne '');
-                      $value     = $VALUE      if(defined $VALUE);
-                      $unit      = $UNIT       if(defined $UNIT);
-                  }
-
-                  # cut the data down to the maximum field length
-                  ($dev_name,$dev_type,$event,$reading,$value,$unit) = DbLog_cutCol($hash,$dev_name,$dev_type,$event,$reading,$value,$unit);
-
-                  my $row = ($timestamp."|".$dev_name."|".$dev_type."|".$event."|".$reading."|".$value."|".$unit);
-                  Log3 $hash->{NAME}, 4, "DbLog $name -> added event - Timestamp: $timestamp, Device: $dev_name, Type: $dev_type, Event: $event, Reading: $reading, Value: $value, Unit: $unit"
-                                          if($vb4show && !$hash->{HELPER}{".RUNNING_PID"});
-
-                  if($async) {
-                      # asynchronous non-blocking mode
-                      # cache & cache index for events to be written asynchronously to the DB
-                      $hash->{cache}{index}++;
-                      my $index = $hash->{cache}{index};
-                      $hash->{cache}{".memcache"}{$index} = $row;
-
-                      my $memcount = $hash->{cache}{".memcache"}?scalar(keys%{$hash->{cache}{".memcache"}}):0;
-                      if($ce == 1) {
-                          readingsSingleUpdate($hash, "CacheUsage", $memcount, 1);
-                      } else {
-                          readingsSingleUpdate($hash, 'CacheUsage', $memcount, 0);
-                      }
-                      # call the asynchronous write routine when the cache fill level is reached
-                      if($memcount >= $clim) {
-                          my $lmlr     = $hash->{HELPER}{LASTLIMITRUNTIME};
-                          my $syncival = AttrVal($name, "syncInterval", 30);
-                          if(!$lmlr || gettimeofday() > $lmlr+($syncival/2)) {
-                              Log3 $hash->{NAME}, 4, "DbLog $name -> Number of cache entries reached cachelimit $clim - start database sync.";
-                              DbLog_execmemcache($hash);
-                              $hash->{HELPER}{LASTLIMITRUNTIME} = gettimeofday();
-                          }
-                      }
-                      # determine the runtime of the notify routine
-                      $net = tv_interval($nst);
-                  } else {
-                      # synchronous mode
-                      push(@row_array, $row);
-                  }
-              }
-          }
-      }
-  };
-  if(!$async) {
-      if(@row_array) {
-          # synchronous mode
-          # return if "reopen" with a timeout is still running
-          return if($hash->{HELPER}{REOPEN_RUNS});
-          my $error = DbLog_Push($hash, $vb4show, @row_array);
-          Log3 $name, 5, "DbLog $name -> DbLog_Push Returncode: $error" if($vb4show);
-
-          my $state = $error?$error:(IsDisabled($name))?"disabled":"connected";
-          my $evt   = ($state eq $hash->{HELPER}{OLDSTATE})?0:1;
-          readingsSingleUpdate($hash, "state", $state, $evt);
-          $hash->{HELPER}{OLDSTATE} = $state;
-
-          # determine the runtime of the notify routine
-          $net = tv_interval($nst);
-      }
-  }
-  if($net && AttrVal($name, "showNotifyTime", undef)) {
-      readingsSingleUpdate($hash, "notify_processing_time", sprintf("%.4f",$net), 1);
-  }
-return;
-}
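
Editor's aside: between DbLog_Log and DbLog_Push the events travel as pipe-separated rows; literal pipes in the event are escaped as _ESC_ beforehand and unescaped again during unpacking. A standalone illustration with made-up values:

use strict;
use warnings;

my ($timestamp,$dev_name,$dev_type,$event,$reading,$value,$unit) =
   ("2018-10-17 14:36:27","MyDevice","DUMMY","state: on|off","state","on|off","");

$event =~ s/\|/_ESC_/g;          # escape pipe "|" (done on the raw event in DbLog_Log)
$value =~ s/\|/_ESC_/g;          # values derived from the event carry the same escaping
my $row = $timestamp."|".$dev_name."|".$dev_type."|".$event."|".$reading."|".$value."|".$unit;

my @a = split("\\|",$row);       # unpack again, as DbLog_Push does
s/_ESC_/\|/g for @a;             # escaped pipes return to "|"
print "$a[3]\n";                 # prints: state: on|off
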
-
-#################################################################################################
-#
-# Insert routine for writing values into the DB in synchronous mode
-#
-#################################################################################################
-sub DbLog_Push(@) {
-  my ($hash, $vb4show, @row_array) = @_;
-  my $name      = $hash->{NAME};
-  my $DbLogType = AttrVal($name, "DbLogType", "History");
-  my $supk      = AttrVal($name, "noSupportPK", 0);
-  my $errorh    = 0;
-  my $error     = 0;
-  my $doins     = 0;  # helper variable: if "1", inserts into table current shall be done (updates failed)
-  my $dbh;
-
-  my $nh = ($hash->{MODEL} ne 'SQLITE')?1:0;
-  # use a separate $dbh to avoid aborts in plots (SQLite) on the one hand
-  # and "MySQL-Server has gone away" errors on the other
-  if ($nh) {
-      $dbh = DbLog_ConnectNewDBH($hash);
-      return if(!$dbh);
-  } else {
-      $dbh = $hash->{DBHP};
-      eval {
-          if ( !$dbh || not $dbh->ping ) {
-              # DB Session dead, try to reopen now !
-              DbLog_ConnectPush($hash,1);
-          }
-      };
-      if ($@) {
-          Log3($name, 1, "DbLog $name: DBLog_Push - DB Session dead! - $@");
-          return $@;
-      } else {
-          $dbh = $hash->{DBHP};
-      }
-  }
-
-  $dbh->{RaiseError} = 1;
-  $dbh->{PrintError} = 0;
-
-  my ($useac,$useta) = DbLog_commitMode($hash);
-  my $ac = ($dbh->{AutoCommit})?"ON":"OFF";
-  my $tm = ($useta)?"ON":"OFF";
-
-  Log3 $name, 4, "DbLog $name -> ################################################################";
-  Log3 $name, 4, "DbLog $name -> ###         New database processing cycle - synchronous      ###";
-  Log3 $name, 4, "DbLog $name -> ################################################################";
-  Log3 $name, 4, "DbLog $name -> DbLogType is: $DbLogType";
-  Log3 $name, 4, "DbLog $name -> AutoCommit mode: $ac, Transaction mode: $tm";
-
-  # check whether a primary key is used; @usepkx? number of fields in the PK : 0 if no PK, $pkx? names of the fields : none if no PK
-  my ($usepkh,$usepkc,$pkh,$pkc);
-  if (!$supk) {
-      ($usepkh,$usepkc,$pkh,$pkc) = DbLog_checkUsePK($hash,$dbh);
-  } else {
-      Log3 $hash->{NAME}, 5, "DbLog $name -> Primary Key usage suppressed by attribute noSupportPK";
-  }
-
-  my (@timestamp,@device,@type,@event,@reading,@value,@unit);
-  my (@timestamp_cur,@device_cur,@type_cur,@event_cur,@reading_cur,@value_cur,@unit_cur);
-  my ($sth_ih,$sth_ic,$sth_uc);
-  no warnings 'uninitialized';
-
-  my $ceti = $#row_array+1;
-
-  foreach my $row (@row_array) {
-      my @a = split("\\|",$row);
-      s/_ESC_/\|/g for @a;          # escaped pipes return to "|"
-      push(@timestamp, "$a[0]");
-      push(@device, "$a[1]");
-      push(@type, "$a[2]");
-      push(@event, "$a[3]");
-      push(@reading, "$a[4]");
-      push(@value, "$a[5]");
-      push(@unit, "$a[6]");
-      Log3 $hash->{NAME}, 4, "DbLog $name -> processing event Timestamp: $a[0], Device: $a[1], Type: $a[2], Event: $a[3], Reading: $a[4], Value: $a[5], Unit: $a[6]"
-                              if($vb4show);
-  }
-  use warnings;
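
Editor's aside: the branches below pick a duplicate-tolerant INSERT per database model whenever the table carries a primary key. The mapping, condensed to a hash lookup (statement texts as in the module, selection logic simplified):

use strict;
use warnings;

my %ins_hist = (
  MYSQL      => "INSERT IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)",
  SQLITE     => "INSERT OR IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)",
  POSTGRESQL => "INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING",
);
my $model = "MYSQL";              # $hash->{MODEL} in the module
my $sql   = $ins_hist{$model}
         // "INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)";   # old behavior (no PK)
print "$sql\n";
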
$dbh->prepare("INSERT IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkh && $hash->{MODEL} eq 'SQLITE') { - eval { $sth_ih = $dbh->prepare("INSERT OR IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkh && $hash->{MODEL} eq 'POSTGRESQL') { - eval { $sth_ih = $dbh->prepare("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING"); }; - } else { - # old behavior - eval { $sth_ih = $dbh->prepare("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } - if ($@) { - return $@; - } - $sth_ih->bind_param_array(1, [@timestamp]); - $sth_ih->bind_param_array(2, [@device]); - $sth_ih->bind_param_array(3, [@type]); - $sth_ih->bind_param_array(4, [@event]); - $sth_ih->bind_param_array(5, [@reading]); - $sth_ih->bind_param_array(6, [@value]); - $sth_ih->bind_param_array(7, [@unit]); - } - - if (lc($DbLogType) =~ m(current) ) { - # insert current mit/ohne primary key, insert-values für current werden generiert - if ($usepkc && $hash->{MODEL} eq 'MYSQL') { - eval { $sth_ic = $dbh->prepare("INSERT IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkc && $hash->{MODEL} eq 'SQLITE') { - eval { $sth_ic = $dbh->prepare("INSERT OR IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkc && $hash->{MODEL} eq 'POSTGRESQL') { - eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING"); }; - } else { - # old behavior - eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } - if ($@) { - return $@; - } - if ($usepkc && $hash->{MODEL} eq 'MYSQL') { - # update current (mit PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@device]); - $sth_uc->bind_param_array(3, [@type]); - $sth_uc->bind_param_array(4, [@event]); - $sth_uc->bind_param_array(5, [@reading]); - $sth_uc->bind_param_array(6, [@value]); - $sth_uc->bind_param_array(7, [@unit]); - } elsif ($usepkc && $hash->{MODEL} eq 'SQLITE') { - # update current (mit PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("INSERT OR REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@device]); - $sth_uc->bind_param_array(3, [@type]); - $sth_uc->bind_param_array(4, [@event]); - $sth_uc->bind_param_array(5, [@reading]); - $sth_uc->bind_param_array(6, [@value]); - $sth_uc->bind_param_array(7, [@unit]); - } elsif ($usepkc && $hash->{MODEL} eq 'POSTGRESQL') { - # update current (mit PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) 
ON CONFLICT ($pkc) - DO UPDATE SET TIMESTAMP=EXCLUDED.TIMESTAMP, DEVICE=EXCLUDED.DEVICE, TYPE=EXCLUDED.TYPE, EVENT=EXCLUDED.EVENT, READING=EXCLUDED.READING, - VALUE=EXCLUDED.VALUE, UNIT=EXCLUDED.UNIT"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@device]); - $sth_uc->bind_param_array(3, [@type]); - $sth_uc->bind_param_array(4, [@event]); - $sth_uc->bind_param_array(5, [@reading]); - $sth_uc->bind_param_array(6, [@value]); - $sth_uc->bind_param_array(7, [@unit]); - } else { - # for update current (ohne PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("UPDATE current SET TIMESTAMP=?, TYPE=?, EVENT=?, VALUE=?, UNIT=? WHERE (DEVICE=?) AND (READING=?)"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@type]); - $sth_uc->bind_param_array(3, [@event]); - $sth_uc->bind_param_array(4, [@value]); - $sth_uc->bind_param_array(5, [@unit]); - $sth_uc->bind_param_array(6, [@device]); - $sth_uc->bind_param_array(7, [@reading]); - } - } - - my ($tuples, $rows); - - # insert into history-Tabelle - eval { $dbh->begin_work() if($useta && $dbh->{AutoCommit}); }; # Transaktion wenn gewünscht und autocommit ein - if ($@) { - Log3($name, 2, "DbLog $name -> Error start transaction for history - $@"); - } - eval { - if (lc($DbLogType) =~ m(history) ) { - ($tuples, $rows) = $sth_ih->execute_array( { ArrayTupleStatus => \my @tuple_status } ); - my $nins_hist = 0; - for my $tuple (0..$#row_array) { - my $status = $tuple_status[$tuple]; - $status = 0 if($status eq "0E0"); - next if($status); # $status ist "1" wenn insert ok - Log3 $hash->{NAME}, 3, "DbLog $name -> Insert into history rejected".($usepkh?" (possible PK violation) ":" ")."- TS: $timestamp[$tuple], Device: $device[$tuple], Event: $event[$tuple]"; - $nins_hist++; - } - if(!$nins_hist) { - Log3 $hash->{NAME}, 4, "DbLog $name -> $ceti of $ceti events inserted into table history".($usepkh?" using PK on columns $pkh":""); - } else { - Log3 $hash->{NAME}, 4, "DbLog $name -> ".($ceti-$nins_hist)." of $ceti events inserted into table history".($usepkh?" 
using PK on columns $pkh":""); - } - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; # issue Turning on AutoCommit failed - if ($@) { - Log3($name, 2, "DbLog $name -> Error commit history - $@"); - } else { - if(!$dbh->{AutoCommit}) { - Log3($name, 4, "DbLog $name -> insert table history committed"); - } else { - Log3($name, 4, "DbLog $name -> insert table history committed by autocommit"); - } - } - } - }; - - if ($@) { - Log3 $hash->{NAME}, 2, "DbLog $name -> Error table history - $@"; - $errorh = $@; - eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; # issue Turning on AutoCommit failed - if ($@) { - Log3($name, 2, "DbLog $name -> Error rollback history - $@"); - } else { - Log3($name, 4, "DbLog $name -> insert history rolled back"); - } - } - - # update or insert current - eval { $dbh->begin_work() if($useta && $dbh->{AutoCommit}); }; # Transaktion wenn gewünscht und autocommit ein - if ($@) { - Log3($name, 2, "DbLog $name -> Error start transaction for history - $@"); - } - eval { - if (lc($DbLogType) =~ m(current) ) { - ($tuples, $rows) = $sth_uc->execute_array( { ArrayTupleStatus => \my @tuple_status } ); - # Log3 $hash->{NAME}, 2, "DbLog $name -> Rows: $rows, Ceti: $ceti"; - my $nupd_cur = 0; - for my $tuple (0..$#row_array) { - my $status = $tuple_status[$tuple]; - $status = 0 if($status eq "0E0"); - next if($status); # $status ist "1" wenn update ok - Log3 $hash->{NAME}, 4, "DbLog $name -> Failed to update in current, try to insert - TS: $timestamp[$tuple], Device: $device[$tuple], Reading: $reading[$tuple], Status = $status"; - push(@timestamp_cur, "$timestamp[$tuple]"); - push(@device_cur, "$device[$tuple]"); - push(@type_cur, "$type[$tuple]"); - push(@event_cur, "$event[$tuple]"); - push(@reading_cur, "$reading[$tuple]"); - push(@value_cur, "$value[$tuple]"); - push(@unit_cur, "$unit[$tuple]"); - $nupd_cur++; - } - if(!$nupd_cur) { - Log3 $hash->{NAME}, 4, "DbLog $name -> $ceti of $ceti events updated in table current".($usepkc?" using PK on columns $pkc":""); - } else { - Log3 $hash->{NAME}, 4, "DbLog $name -> $nupd_cur of $ceti events not updated and try to insert into table current".($usepkc?" using PK on columns $pkc":""); - $doins = 1; - } - - if ($doins) { - # events die nicht in Tabelle current updated wurden, werden in current neu eingefügt - $sth_ic->bind_param_array(1, [@timestamp_cur]); - $sth_ic->bind_param_array(2, [@device_cur]); - $sth_ic->bind_param_array(3, [@type_cur]); - $sth_ic->bind_param_array(4, [@event_cur]); - $sth_ic->bind_param_array(5, [@reading_cur]); - $sth_ic->bind_param_array(6, [@value_cur]); - $sth_ic->bind_param_array(7, [@unit_cur]); - - ($tuples, $rows) = $sth_ic->execute_array( { ArrayTupleStatus => \my @tuple_status } ); - my $nins_cur = 0; - for my $tuple (0..$#device_cur) { - my $status = $tuple_status[$tuple]; - $status = 0 if($status eq "0E0"); - next if($status); # $status ist "1" wenn insert ok - Log3 $hash->{NAME}, 3, "DbLog $name -> Insert into current rejected - TS: $timestamp[$tuple], Device: $device_cur[$tuple], Reading: $reading_cur[$tuple], Status = $status"; - $nins_cur++; - } - if(!$nins_cur) { - Log3 $hash->{NAME}, 4, "DbLog $name -> ".($#device_cur+1)." of ".($#device_cur+1)." events inserted into table current ".($usepkc?" using PK on columns $pkc":""); - } else { - Log3 $hash->{NAME}, 4, "DbLog $name -> ".($#device_cur+1-$nins_cur)." of ".($#device_cur+1)." events inserted into table current".($usepkc?" 
using PK on columns $pkc":""); - } - } - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; # issue Turning on AutoCommit failed - if ($@) { - Log3($name, 2, "DbLog $name -> Error commit table current - $@"); - } else { - if(!$dbh->{AutoCommit}) { - Log3($name, 4, "DbLog $name -> insert / update table current committed"); - } else { - Log3($name, 4, "DbLog $name -> insert / update table current committed by autocommit"); - } - } - } - }; - - if ($errorh) { - $error = $errorh; - } - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - $dbh->disconnect if ($nh); - -return Encode::encode_utf8($error); -} - -################################################################################################# -# -# MemCache auswerten und Schreibroutine asynchron und non-blocking aufrufen -# -################################################################################################# -sub DbLog_execmemcache ($) { - my ($hash) = @_; - my $name = $hash->{NAME}; - my $syncival = AttrVal($name, "syncInterval", 30); - my $clim = AttrVal($name, "cacheLimit", 500); - my $async = AttrVal($name, "asyncMode", undef); - my $ce = AttrVal($name, "cacheEvents", 0); - my $timeout = AttrVal($name, "timeout", 86400); - my $DbLogType = AttrVal($name, "DbLogType", "History"); - my $dbconn = $hash->{dbconn}; - my $dbuser = $hash->{dbuser}; - my $dbpassword = $attr{"sec$name"}{secret}; - my $dolog = 1; - my $error = 0; - my (@row_array,$memcount,$dbh); - - RemoveInternalTimer($hash, "DbLog_execmemcache"); - - if($init_done != 1) { - InternalTimer(gettimeofday()+5, "DbLog_execmemcache", $hash, 0); - return; - } - - # return wenn "reopen" mit Zeitangabe läuft, oder kein asynchroner Mode oder wenn disabled - if(!$async || IsDisabled($name) || $hash->{HELPER}{REOPEN_RUNS}) { - return; - } - - # tote PID's löschen - if($hash->{HELPER}{".RUNNING_PID"} && $hash->{HELPER}{".RUNNING_PID"}{pid} =~ m/DEAD/) { - delete $hash->{HELPER}{".RUNNING_PID"}; - } - if($hash->{HELPER}{REDUCELOG_PID} && $hash->{HELPER}{REDUCELOG_PID}{pid} =~ m/DEAD/) { - delete $hash->{HELPER}{REDUCELOG_PID}; - } - if($hash->{HELPER}{DELDAYS_PID} && $hash->{HELPER}{DELDAYS_PID}{pid} =~ m/DEAD/) { - delete $hash->{HELPER}{DELDAYS_PID}; - } - - # bei SQLite Sperrverwaltung Logging wenn andere schreibende Zugriffe laufen - if($hash->{MODEL} eq "SQLITE") { - if($hash->{HELPER}{DELDAYS_PID}) { - $error = "deleteOldDaysNbl is running - resync at NextSync"; - $dolog = 0; - } - if($hash->{HELPER}{REDUCELOG_PID}) { - $error = "reduceLogNbl is running - resync at NextSync"; - $dolog = 0; - } - if($hash->{HELPER}{".RUNNING_PID"}) { - $error = "Commit already running - resync at NextSync"; - $dolog = 0; - } - } - - $memcount = $hash->{cache}{".memcache"}?scalar(keys%{$hash->{cache}{".memcache"}}):0; - if($ce == 2) { - readingsSingleUpdate($hash, "CacheUsage", $memcount, 1); - } else { - readingsSingleUpdate($hash, 'CacheUsage', $memcount, 0); - } - - if($memcount && $dolog && !$hash->{HELPER}{".RUNNING_PID"}) { - Log3 $name, 4, "DbLog $name -> ################################################################"; - Log3 $name, 4, "DbLog $name -> ### New database processing cycle - asynchronous ###"; - Log3 $name, 4, "DbLog $name -> ################################################################"; - Log3 $name, 4, "DbLog $name -> MemCache contains $memcount entries to process"; - Log3 $name, 4, "DbLog $name -> DbLogType is: $DbLogType"; - - foreach my $key (sort(keys%{$hash->{cache}{".memcache"}})) { - Log3 $hash->{NAME}, 5, "DbLog $name -> MemCache contains: 
".$hash->{cache}{".memcache"}{$key}; - push(@row_array, delete($hash->{cache}{".memcache"}{$key})); - } - - my $rowlist = join('§', @row_array); - $rowlist = encode_base64($rowlist,""); - $hash->{HELPER}{".RUNNING_PID"} = BlockingCall ( - "DbLog_PushAsync", - "$name|$rowlist", - "DbLog_PushAsyncDone", - $timeout, - "DbLog_PushAsyncAborted", - $hash ); - $hash->{HELPER}{".RUNNING_PID"}{loglevel} = 4; - Log3 $hash->{NAME}, 5, "DbLog $name -> DbLog_PushAsync called with timeout: $timeout"; - } else { - if($dolog && $hash->{HELPER}{".RUNNING_PID"}) { - $error = "Commit already running - resync at NextSync"; - } - } - - # $memcount = scalar(keys%{$hash->{cache}{".memcache"}}); - - my $nextsync = gettimeofday()+$syncival; - my $nsdt = FmtDateTime($nextsync); - - if(AttrVal($name, "syncEvents", undef)) { - readingsSingleUpdate($hash, "NextSync", $nsdt. " or if CacheUsage ".$clim." reached", 1); - } else { - readingsSingleUpdate($hash, "NextSync", $nsdt. " or if CacheUsage ".$clim." reached", 0); - } - - my $state = $error?$error:$hash->{HELPER}{OLDSTATE}; - my $evt = ($state eq $hash->{HELPER}{OLDSTATE})?0:1; - readingsSingleUpdate($hash, "state", $state, $evt); - $hash->{HELPER}{OLDSTATE} = $state; - - InternalTimer($nextsync, "DbLog_execmemcache", $hash, 0); - -return; -} - -################################################################################################# -# -# Schreibroutine Einfügen Werte in DB asynchron non-blocking -# -################################################################################################# -sub DbLog_PushAsync(@) { - my ($string) = @_; - my ($name,$rowlist) = split("\\|", $string); - my $hash = $defs{$name}; - my $dbconn = $hash->{dbconn}; - my $dbuser = $hash->{dbuser}; - my $dbpassword = $attr{"sec$name"}{secret}; - my $DbLogType = AttrVal($name, "DbLogType", "History"); - my $supk = AttrVal($name, "noSupportPK", 0); - my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; - my $errorh = 0; - my $error = 0; - my $doins = 0; # Hilfsvariable, wenn "1" sollen inserts in Tabelle current erfolgen (updates schlugen fehl) - my $dbh; - my $rowlback = 0; # Eventliste für Rückgabe wenn Fehler - - Log3 ($name, 5, "DbLog $name -> Start DbLog_PushAsync"); - Log3 ($name, 5, "DbLog $name -> DbLogType is: $DbLogType"); - - # Background-Startzeit - my $bst = [gettimeofday]; - - my ($useac,$useta) = DbLog_commitMode($hash); - if(!$useac) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 0, mysql_enable_utf8 => $utf8 });}; - } elsif($useac == 1) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, mysql_enable_utf8 => $utf8 });}; - } else { - # Server default - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, mysql_enable_utf8 => $utf8 });}; - } - if ($@) { - $error = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name - Error: $@"); - Log3 ($name, 5, "DbLog $name -> DbLog_PushAsync finished"); - return "$name|$error|0|$rowlist"; - } - - my $ac = ($dbh->{AutoCommit})?"ON":"OFF"; - my $tm = ($useta)?"ON":"OFF"; - Log3 $hash->{NAME}, 4, "DbLog $name -> AutoCommit mode: $ac, Transaction mode: $tm"; - - # check ob PK verwendet wird, @usepkx?Anzahl der Felder im PK:0 wenn kein PK, $pkx?Namen der Felder:none wenn kein PK - my ($usepkh,$usepkc,$pkh,$pkc); - if (!$supk) { - ($usepkh,$usepkc,$pkh,$pkc) = DbLog_checkUsePK($hash,$dbh); - } else { - Log3 $hash->{NAME}, 5, "DbLog $name -> Primary Key 
usage suppressed by attribute noSupportPK"; - } - - my $rowldec = decode_base64($rowlist); - my @row_array = split('§', $rowldec); - - my (@timestamp,@device,@type,@event,@reading,@value,@unit); - my (@timestamp_cur,@device_cur,@type_cur,@event_cur,@reading_cur,@value_cur,@unit_cur); - my ($sth_ih,$sth_ic,$sth_uc); - no warnings 'uninitialized'; - - my $ceti = $#row_array+1; - - foreach my $row (@row_array) { - my @a = split("\\|",$row); - s/_ESC_/\|/g for @a; # escaped Pipe return to "|" - push(@timestamp, "$a[0]"); - push(@device, "$a[1]"); - push(@type, "$a[2]"); - push(@event, "$a[3]"); - push(@reading, "$a[4]"); - push(@value, "$a[5]"); - push(@unit, "$a[6]"); - Log3 $hash->{NAME}, 5, "DbLog $name -> processing event Timestamp: $a[0], Device: $a[1], Type: $a[2], Event: $a[3], Reading: $a[4], Value: $a[5], Unit: $a[6]"; - } - use warnings; - - if (lc($DbLogType) =~ m(history)) { - # insert history mit/ohne primary key - if ($usepkh && $hash->{MODEL} eq 'MYSQL') { - eval { $sth_ih = $dbh->prepare("INSERT IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkh && $hash->{MODEL} eq 'SQLITE') { - eval { $sth_ih = $dbh->prepare("INSERT OR IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkh && $hash->{MODEL} eq 'POSTGRESQL') { - eval { $sth_ih = $dbh->prepare("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING"); }; - } else { - # old behavior - eval { $sth_ih = $dbh->prepare("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } - if ($@) { - # Eventliste zurückgeben wenn z.B. disk I/O error bei SQLITE - $error = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name - Error: $@"); - Log3 ($name, 5, "DbLog $name -> DbLog_PushAsync finished"); - $dbh->disconnect(); - return "$name|$error|0|$rowlist"; - } - $sth_ih->bind_param_array(1, [@timestamp]); - $sth_ih->bind_param_array(2, [@device]); - $sth_ih->bind_param_array(3, [@type]); - $sth_ih->bind_param_array(4, [@event]); - $sth_ih->bind_param_array(5, [@reading]); - $sth_ih->bind_param_array(6, [@value]); - $sth_ih->bind_param_array(7, [@unit]); - } - - if (lc($DbLogType) =~ m(current) ) { - # insert current mit/ohne primary key, insert-values für current werden generiert - if ($usepkc && $hash->{MODEL} eq 'MYSQL') { - eval { $sth_ic = $dbh->prepare("INSERT IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkc && $hash->{MODEL} eq 'SQLITE') { - eval { $sth_ic = $dbh->prepare("INSERT OR IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } elsif ($usepkc && $hash->{MODEL} eq 'POSTGRESQL') { - eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING"); }; - } else { - # old behavior - eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; - } - if ($@) { - # Eventliste zurückgeben wenn z.B. 
Disk I/O error bei SQLITE - $error = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name - Error: $@"); - Log3 ($name, 5, "DbLog $name -> DbLog_PushAsync finished"); - $dbh->disconnect(); - return "$name|$error|0|$rowlist"; - } - if ($usepkc && $hash->{MODEL} eq 'MYSQL') { - # update current (mit PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@device]); - $sth_uc->bind_param_array(3, [@type]); - $sth_uc->bind_param_array(4, [@event]); - $sth_uc->bind_param_array(5, [@reading]); - $sth_uc->bind_param_array(6, [@value]); - $sth_uc->bind_param_array(7, [@unit]); - } elsif ($usepkc && $hash->{MODEL} eq 'SQLITE') { - # update current (mit PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("INSERT OR REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@device]); - $sth_uc->bind_param_array(3, [@type]); - $sth_uc->bind_param_array(4, [@event]); - $sth_uc->bind_param_array(5, [@reading]); - $sth_uc->bind_param_array(6, [@value]); - $sth_uc->bind_param_array(7, [@unit]); - } elsif ($usepkc && $hash->{MODEL} eq 'POSTGRESQL') { - # update current (mit PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT ($pkc) - DO UPDATE SET TIMESTAMP=EXCLUDED.TIMESTAMP, DEVICE=EXCLUDED.DEVICE, TYPE=EXCLUDED.TYPE, EVENT=EXCLUDED.EVENT, READING=EXCLUDED.READING, - VALUE=EXCLUDED.VALUE, UNIT=EXCLUDED.UNIT"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@device]); - $sth_uc->bind_param_array(3, [@type]); - $sth_uc->bind_param_array(4, [@event]); - $sth_uc->bind_param_array(5, [@reading]); - $sth_uc->bind_param_array(6, [@value]); - $sth_uc->bind_param_array(7, [@unit]); - } else { - # update current (ohne PK), insert-values für current wird generiert - $sth_uc = $dbh->prepare("UPDATE current SET TIMESTAMP=?, TYPE=?, EVENT=?, VALUE=?, UNIT=? WHERE (DEVICE=?) AND (READING=?)"); - $sth_uc->bind_param_array(1, [@timestamp]); - $sth_uc->bind_param_array(2, [@type]); - $sth_uc->bind_param_array(3, [@event]); - $sth_uc->bind_param_array(4, [@value]); - $sth_uc->bind_param_array(5, [@unit]); - $sth_uc->bind_param_array(6, [@device]); - $sth_uc->bind_param_array(7, [@reading]); - } - } - - # SQL-Startzeit - my $st = [gettimeofday]; - - my ($tuples, $rows); - - # insert into history - eval { $dbh->begin_work() if($useta && $dbh->{AutoCommit}); }; # Transaktion wenn gewünscht und autocommit ein - if ($@) { - Log3($name, 2, "DbLog $name -> Error start transaction for history - $@"); - } - eval { - if (lc($DbLogType) =~ m(history) ) { - ($tuples, $rows) = $sth_ih->execute_array( { ArrayTupleStatus => \my @tuple_status } ); - my $nins_hist = 0; - my @n2hist; - for my $tuple (0..$#row_array) { - my $status = $tuple_status[$tuple]; - $status = 0 if($status eq "0E0"); - next if($status); # $status ist "1" wenn insert ok - Log3 $hash->{NAME}, 3, "DbLog $name -> Insert into history rejected".($usepkh?" 
(possible PK violation) ":" ")."- TS: $timestamp[$tuple], Device: $device[$tuple], Event: $event[$tuple]"; - my $nlh = ($timestamp[$tuple]."|".$device[$tuple]."|".$type[$tuple]."|".$event[$tuple]."|".$reading[$tuple]."|".$value[$tuple]."|".$unit[$tuple]); - push(@n2hist, "$nlh"); - $nins_hist++; - } - if(!$nins_hist) { - Log3 $hash->{NAME}, 4, "DbLog $name -> $ceti of $ceti events inserted into table history".($usepkh?" using PK on columns $pkh":""); - } else { - Log3 $hash->{NAME}, 4, "DbLog $name -> ".($ceti-$nins_hist)." of $ceti events inserted into table history".($usepkh?" using PK on columns $pkh":""); - s/\|/_ESC_/g for @n2hist; # escape Pipe "|" - $rowlist = join('§', @n2hist); - $rowlist = encode_base64($rowlist,""); - } - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; # issue Turning on AutoCommit failed - if ($@) { - Log3($name, 2, "DbLog $name -> Error commit history - $@"); - } else { - if(!$dbh->{AutoCommit}) { - Log3($name, 4, "DbLog $name -> insert table history committed"); - } else { - Log3($name, 4, "DbLog $name -> insert table history committed by autocommit"); - } - } - } - }; - - if ($@) { - $errorh = $@; - Log3 $hash->{NAME}, 2, "DbLog $name -> Error table history - $errorh"; - $error = encode_base64($errorh,""); - $rowlback = $rowlist if($useta); # nicht gespeicherte Datensätze nur zurück geben wenn Transaktion ein - } - - # update or insert current - eval { $dbh->begin_work() if($useta && $dbh->{AutoCommit}); }; # Transaktion wenn gewünscht und autocommit ein - if ($@) { - Log3($name, 2, "DbLog $name -> Error start transaction for current - $@"); - } - eval { - if (lc($DbLogType) =~ m(current) ) { - ($tuples, $rows) = $sth_uc->execute_array( { ArrayTupleStatus => \my @tuple_status } ); - my $nupd_cur = 0; - for my $tuple (0..$#row_array) { - my $status = $tuple_status[$tuple]; - $status = 0 if($status eq "0E0"); - next if($status); # $status ist "1" wenn update ok - Log3 $hash->{NAME}, 4, "DbLog $name -> Failed to update in current, try to insert - TS: $timestamp[$tuple], Device: $device[$tuple], Reading: $reading[$tuple], Status = $status"; - push(@timestamp_cur, "$timestamp[$tuple]"); - push(@device_cur, "$device[$tuple]"); - push(@type_cur, "$type[$tuple]"); - push(@event_cur, "$event[$tuple]"); - push(@reading_cur, "$reading[$tuple]"); - push(@value_cur, "$value[$tuple]"); - push(@unit_cur, "$unit[$tuple]"); - $nupd_cur++; - } - if(!$nupd_cur) { - Log3 $hash->{NAME}, 4, "DbLog $name -> $ceti of $ceti events updated in table current".($usepkc?" using PK on columns $pkc":""); - } else { - Log3 $hash->{NAME}, 4, "DbLog $name -> $nupd_cur of $ceti events not updated and try to insert into table current".($usepkc?" 
using PK on columns $pkc":""); - $doins = 1; - } - - if ($doins) { - # events die nicht in Tabelle current updated wurden, werden in current neu eingefügt - $sth_ic->bind_param_array(1, [@timestamp_cur]); - $sth_ic->bind_param_array(2, [@device_cur]); - $sth_ic->bind_param_array(3, [@type_cur]); - $sth_ic->bind_param_array(4, [@event_cur]); - $sth_ic->bind_param_array(5, [@reading_cur]); - $sth_ic->bind_param_array(6, [@value_cur]); - $sth_ic->bind_param_array(7, [@unit_cur]); - - ($tuples, $rows) = $sth_ic->execute_array( { ArrayTupleStatus => \my @tuple_status } ); - my $nins_cur = 0; - for my $tuple (0..$#device_cur) { - my $status = $tuple_status[$tuple]; - $status = 0 if($status eq "0E0"); - next if($status); # $status ist "1" wenn insert ok - Log3 $hash->{NAME}, 3, "DbLog $name -> Insert into current rejected - TS: $timestamp[$tuple], Device: $device_cur[$tuple], Reading: $reading_cur[$tuple], Status = $status"; - $nins_cur++; - } - if(!$nins_cur) { - Log3 $hash->{NAME}, 4, "DbLog $name -> ".($#device_cur+1)." of ".($#device_cur+1)." events inserted into table current ".($usepkc?" using PK on columns $pkc":""); - } else { - Log3 $hash->{NAME}, 4, "DbLog $name -> ".($#device_cur+1-$nins_cur)." of ".($#device_cur+1)." events inserted into table current".($usepkc?" using PK on columns $pkc":""); - } - } - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; # issue Turning on AutoCommit failed - if ($@) { - Log3($name, 2, "DbLog $name -> Error commit table current - $@"); - } else { - if(!$dbh->{AutoCommit}) { - Log3($name, 4, "DbLog $name -> insert / update table current committed"); - } else { - Log3($name, 4, "DbLog $name -> insert / update table current committed by autocommit"); - } - } - } - }; - - $dbh->disconnect(); - - # SQL-Laufzeit ermitteln - my $rt = tv_interval($st); - - Log3 ($name, 5, "DbLog $name -> DbLog_PushAsync finished"); - - # Background-Laufzeit ermitteln - my $brt = tv_interval($bst); - - $rt = $rt.",".$brt; - -return "$name|$error|$rt|$rowlback"; -} - -############################################################################################# -# Auswertung non-blocking asynchron DbLog_PushAsync -############################################################################################# -sub DbLog_PushAsyncDone ($) { - my ($string) = @_; - my @a = split("\\|",$string); - my $name = $a[0]; - my $hash = $defs{$name}; - my $error = $a[1]?decode_base64($a[1]):0; - my $bt = $a[2]; - my $rowlist = $a[3]; - my $asyncmode = AttrVal($name, "asyncMode", undef); - my $memcount; - - Log3 ($name, 5, "DbLog $name -> Start DbLog_PushAsyncDone"); - - if($rowlist) { - $rowlist = decode_base64($rowlist); - my @row_array = split('§', $rowlist); - - #one Transaction - eval { - foreach my $row (@row_array) { - # Cache & CacheIndex für Events zum asynchronen Schreiben in DB - $hash->{cache}{index}++; - my $index = $hash->{cache}{index}; - $hash->{cache}{".memcache"}{$index} = $row; - } - $memcount = scalar(keys%{$hash->{cache}{".memcache"}}); - }; - } - - $memcount = $hash->{cache}{".memcache"}?scalar(keys%{$hash->{cache}{".memcache"}}):0; - readingsSingleUpdate($hash, 'CacheUsage', $memcount, 0); - - if(AttrVal($name, "showproctime", undef) && $bt) { - my ($rt,$brt) = split(",", $bt); - readingsBeginUpdate($hash); - readingsBulkUpdate($hash, "background_processing_time", sprintf("%.4f",$brt)); - readingsBulkUpdate($hash, "sql_processing_time", sprintf("%.4f",$rt)); - readingsEndUpdate($hash, 1); - } - - my $state = $error?$error:(IsDisabled($name))?"disabled":"connected"; - my $evt = 
($state eq $hash->{HELPER}{OLDSTATE})?0:1; - readingsSingleUpdate($hash, "state", $state, $evt); - $hash->{HELPER}{OLDSTATE} = $state; - - if(!$asyncmode) { - delete($defs{$name}{READINGS}{NextSync}); - delete($defs{$name}{READINGS}{background_processing_time}); - delete($defs{$name}{READINGS}{sql_processing_time}); - delete($defs{$name}{READINGS}{CacheUsage}); - } - delete $hash->{HELPER}{".RUNNING_PID"}; - delete $hash->{HELPER}{LASTLIMITRUNTIME} if(!$error); - Log3 ($name, 5, "DbLog $name -> DbLog_PushAsyncDone finished"); -return; -} - -############################################################################################# -# Abbruchroutine Timeout non-blocking asynchron DbLog_PushAsync -############################################################################################# -sub DbLog_PushAsyncAborted(@) { - my ($hash,$cause) = @_; - my $name = $hash->{NAME}; - $cause = $cause?$cause:"Timeout: process terminated"; - - Log3 ($name, 2, "DbLog $name -> ".$hash->{HELPER}{".RUNNING_PID"}{fn}." ".$cause) if(!$hash->{HELPER}{SHUTDOWNSEQ}); - readingsSingleUpdate($hash,"state",$cause, 1); - delete $hash->{HELPER}{".RUNNING_PID"}; - delete $hash->{HELPER}{LASTLIMITRUNTIME}; -} - - -################################################################ -# -# zerlegt uebergebenes FHEM-Datum in die einzelnen Bestandteile -# und fuegt noch Defaultwerte ein -# uebergebenes SQL-Format: YYYY-MM-DD HH24:MI:SS -# -################################################################ -sub DbLog_explode_datetime($%) { - my ($t, %def) = @_; - my %retv; - - my (@datetime, @date, @time); - @datetime = split(" ", $t); #Datum und Zeit auftrennen - @date = split("-", $datetime[0]); - @time = split(":", $datetime[1]) if ($datetime[1]); - - if ($date[0]) {$retv{year} = $date[0];} else {$retv{year} = $def{year};} - if ($date[1]) {$retv{month} = $date[1];} else {$retv{month} = $def{month};} - if ($date[2]) {$retv{day} = $date[2];} else {$retv{day} = $def{day};} - if ($time[0]) {$retv{hour} = $time[0];} else {$retv{hour} = $def{hour};} - if ($time[1]) {$retv{minute}= $time[1];} else {$retv{minute}= $def{minute};} - if ($time[2]) {$retv{second}= $time[2];} else {$retv{second}= $def{second};} - - $retv{datetime}=DbLog_implode_datetime($retv{year}, $retv{month}, $retv{day}, $retv{hour}, $retv{minute}, $retv{second}); - - # Log 1, Dumper(%retv); - return %retv -} - -sub DbLog_implode_datetime($$$$$$) { - my ($year, $month, $day, $hour, $minute, $second) = @_; - my $retv = $year."-".$month."-".$day." 
".$hour.":".$minute.":".$second; - - return $retv; -} - -################################################################################### -# Verbindungen zur DB aufbauen -################################################################################### -sub DbLog_readCfg($){ - my ($hash)= @_; - my $name = $hash->{NAME}; - - my $configfilename= $hash->{CONFIGURATION}; - my %dbconfig; - - # use generic fileRead to get configuration data - my ($err, @config) = FileRead($configfilename); - return $err if($err); - - eval join("\n", @config); - - return "could not read connection" if (!defined $dbconfig{connection}); - $hash->{dbconn} = $dbconfig{connection}; - return "could not read user" if (!defined $dbconfig{user}); - $hash->{dbuser} = $dbconfig{user}; - return "could not read password" if (!defined $dbconfig{password}); - $attr{"sec$name"}{secret} = $dbconfig{password}; - - #check the database model - if($hash->{dbconn} =~ m/pg:/i) { - $hash->{MODEL}="POSTGRESQL"; - } elsif ($hash->{dbconn} =~ m/mysql:/i) { - $hash->{MODEL}="MYSQL"; - } elsif ($hash->{dbconn} =~ m/oracle:/i) { - $hash->{MODEL}="ORACLE"; - } elsif ($hash->{dbconn} =~ m/sqlite:/i) { - $hash->{MODEL}="SQLITE"; - } else { - $hash->{MODEL}="unknown"; - Log3 $hash->{NAME}, 1, "Unknown database model found in configuration file $configfilename."; - Log3 $hash->{NAME}, 1, "Only MySQL/MariaDB, PostgreSQL, Oracle, SQLite are fully supported."; - return "unknown database type"; - } - - if($hash->{MODEL} eq "MYSQL") { - $hash->{UTF8} = defined($dbconfig{utf8})?$dbconfig{utf8}:0; - } - -return; -} - -sub DbLog_ConnectPush($;$$) { - # own $dbhp for synchronous logging and dblog_get - my ($hash,$get)= @_; - my $name = $hash->{NAME}; - my $dbconn = $hash->{dbconn}; - my $dbuser = $hash->{dbuser}; - my $dbpassword = $attr{"sec$name"}{secret}; - my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; - my ($dbhp,$state,$evt,$err); - - return 0 if(IsDisabled($name)); - - if($init_done != 1) { - InternalTimer(gettimeofday()+5, "DbLog_ConnectPush", $hash, 0); - return; - } - - Log3 $hash->{NAME}, 3, "DbLog $name - Creating Push-Handle to database $dbconn with user $dbuser" if(!$get); - - my ($useac,$useta) = DbLog_commitMode($hash); - if(!$useac) { - eval {$dbhp = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 0, mysql_enable_utf8 => $utf8 });}; - } elsif($useac == 1) { - eval {$dbhp = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, mysql_enable_utf8 => $utf8 });}; - } else { - # Server default - eval {$dbhp = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, mysql_enable_utf8 => $utf8 });}; - } - - if($@) { - $err = $@; - Log3 $hash->{NAME}, 2, "DbLog $name - Error: $@"; - } - - if(!$dbhp) { - RemoveInternalTimer($hash, "DbLog_ConnectPush"); - Log3 $hash->{NAME}, 4, "DbLog $name - Trying to connect to database"; - - $state = $err?$err:(IsDisabled($name))?"disabled":"disconnected"; - $evt = ($state eq $hash->{HELPER}{OLDSTATE})?0:1; - readingsSingleUpdate($hash, "state", $state, $evt); - $hash->{HELPER}{OLDSTATE} = $state; - - InternalTimer(gettimeofday()+5, 'DbLog_ConnectPush', $hash, 0); - Log3 $hash->{NAME}, 4, "DbLog $name - Waiting for database connection"; - return 0; - } - - $dbhp->{RaiseError} = 0; - $dbhp->{PrintError} = 1; - - Log3 $hash->{NAME}, 3, "DbLog $name - Push-Handle to db $dbconn created" if(!$get); - Log3 $hash->{NAME}, 3, "DbLog $name - UTF8 support enabled" if($utf8 && 
$hash->{MODEL} eq "MYSQL" && !$get); - if(!$get) { - $state = "connected"; - $evt = ($state eq $hash->{HELPER}{OLDSTATE})?0:1; - readingsSingleUpdate($hash, "state", $state, $evt); - $hash->{HELPER}{OLDSTATE} = $state; - } - - $hash->{DBHP}= $dbhp; - - if ($hash->{MODEL} eq "SQLITE") { - $dbhp->do("PRAGMA temp_store=MEMORY"); - $dbhp->do("PRAGMA synchronous=FULL"); # For maximum reliability and for robustness against database corruption, - # SQLite should always be run with its default synchronous setting of FULL. - # https://sqlite.org/howtocorrupt.html - $dbhp->do("PRAGMA journal_mode=WAL"); - $dbhp->do("PRAGMA cache_size=4000"); - } - - return 1; -} - -sub DbLog_ConnectNewDBH($) { - # new dbh for common use (except DbLog_Push and get-function) - my ($hash)= @_; - my $name = $hash->{NAME}; - my $dbconn = $hash->{dbconn}; - my $dbuser = $hash->{dbuser}; - my $dbpassword = $attr{"sec$name"}{secret}; - my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; - my $dbh; - - my ($useac,$useta) = DbLog_commitMode($hash); - if(!$useac) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 0, mysql_enable_utf8 => $utf8 });}; - } elsif($useac == 1) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, mysql_enable_utf8 => $utf8 });}; - } else { - # Server default - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, mysql_enable_utf8 => $utf8 });}; - } - - if($@) { - Log3($name, 2, "DbLog $name - $@"); - my $state = $@?$@:(IsDisabled($name))?"disabled":"disconnected"; - my $evt = ($state eq $hash->{HELPER}{OLDSTATE})?0:1; - readingsSingleUpdate($hash, "state", $state, $evt); - $hash->{HELPER}{OLDSTATE} = $state; - } - - if($dbh) { - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - return $dbh; - } else { - return 0; - } -} - -########################################################################## -# -# Prozedur zum Ausfuehren von SQL-Statements durch externe Module -# -# param1: DbLog-hash -# param2: SQL-Statement -########################################################################## -sub DbLog_ExecSQL($$) -{ - my ($hash,$sql)= @_; - Log3 $hash->{NAME}, 4, "Executing $sql"; - my $dbh = DbLog_ConnectNewDBH($hash); - return if(!$dbh); - my $sth = DbLog_ExecSQL1($hash,$dbh,$sql); - if(!$sth) { - #retry - $dbh->disconnect(); - $dbh = DbLog_ConnectNewDBH($hash); - return if(!$dbh); - $sth = DbLog_ExecSQL1($hash,$dbh,$sql); - if(!$sth) { - Log3 $hash->{NAME}, 2, "DBLog retry failed."; - $dbh->disconnect(); - return 0; - } - Log3 $hash->{NAME}, 2, "DBLog retry ok."; - } - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; - $dbh->disconnect(); - return $sth; -} - -sub DbLog_ExecSQL1($$$) -{ - my ($hash,$dbh,$sql)= @_; - $dbh->{RaiseError} = 1; - $dbh->{PrintError} = 0; - my $sth; - eval { $sth = $dbh->do($sql); }; - if($@) { - Log3 $hash->{NAME}, 2, "DBLog error: $@"; - return 0; - } - return $sth; -} - -################################################################ -# -# GET Funktion -# wird zb. zur Generierung der Plots implizit aufgerufen -# infile : [-|current|history] -# outfile: [-|ALL|INT|WEBCHART] -# -################################################################ -sub DbLog_Get($@) { - my ($hash, @a) = @_; - my $name = $hash->{NAME}; - my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; - my $dbh; - - return DbLog_dbReadings($hash,@a) if $a[1] =~ m/^Readings/; - - return "Usage: get $a[0] ...\n". 
- " where column_spec is :::\n" . - " see the #DbLog entries in the .gplot files\n" . - " is not used, only for compatibility for FileLog, please use - \n" . - " is a prefix, - means stdout\n" - if(int(@a) < 5); - shift @a; - my $inf = lc(shift @a); - my $outf = lc(shift @a); - my $from = shift @a; - my $to = shift @a; # Now @a contains the list of column_specs - my ($internal, @fld); - - if($inf eq "-") { - $inf = "history"; - } - - if($outf eq "int" && $inf eq "current") { - $inf = "history"; - Log3 $hash->{NAME}, 3, "Defining DbLog SVG-Plots with :CURRENT is deprecated. Please define DbLog SVG-Plots with :HISTORY instead of :CURRENT. (define SVG ::HISTORY)"; - } - - if($outf eq "int") { - $outf = "-"; - $internal = 1; - } elsif($outf eq "array"){ - - } elsif(lc($outf) eq "webchart") { - # redirect the get request to the DbLog_chartQuery function - return DbLog_chartQuery($hash, @_); - } - - my @readings = (); - my (%sqlspec, %from_datetime, %to_datetime); - - #uebergebenen Timestamp anpassen - #moegliche Formate: YYYY | YYYY-MM | YYYY-MM-DD | YYYY-MM-DD_HH24 - $from =~ s/_/\ /g; - $to =~ s/_/\ /g; - %from_datetime = DbLog_explode_datetime($from, DbLog_explode_datetime("2000-01-01 00:00:00", ())); - %to_datetime = DbLog_explode_datetime($to, DbLog_explode_datetime("2099-01-01 00:00:00", ())); - $from = $from_datetime{datetime}; - $to = $to_datetime{datetime}; - - if($to =~ /(\d{4})-(\d{2})-(\d{2}) 23:59:59/) { - # 03.09.2018 : https://forum.fhem.de/index.php/topic,65860.msg815640.html#msg815640 - $to =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/; - my $tc = timelocal($6, $5, $4, $3, $2-1, $1-1900); - $tc++; - $to = strftime "%Y-%m-%d %H:%M:%S", localtime($tc); - } - - my ($retval,$retvaldummy,$hour,$sql_timestamp, $sql_device, $sql_reading, $sql_value, $type, $event, $unit) = ""; - my @ReturnArray; - my $writeout = 0; - my (@min, @max, @sum, @cnt, @lastv, @lastd, @mind, @maxd); - my (%tstamp, %lasttstamp, $out_tstamp, $out_value, $minval, $maxval, $deltacalc); #fuer delta-h/d Berechnung - - #extract the Device:Reading arguments into @readings array - for(my $i = 0; $i < int(@a); $i++) { - @fld = split(":", $a[$i], 5); - $readings[$i][0] = $fld[0]; # Device - $readings[$i][1] = $fld[1]; # Reading - $readings[$i][2] = $fld[2]; # Default - $readings[$i][3] = $fld[3]; # function - $readings[$i][4] = $fld[4]; # regexp - - $readings[$i][1] = "%" if(!$readings[$i][1] || length($readings[$i][1])==0); #falls Reading nicht gefuellt setze Joker - } - - $dbh = $hash->{DBHP}; - if ( !$dbh || not $dbh->ping ) { - # DB Session dead, try to reopen now ! - return "Can't connect to database." if(!DbLog_ConnectPush($hash,1)); - $dbh = $hash->{DBHP}; - } - - if( $hash->{PID} != $$ ) { - #create new connection for plotfork - $dbh->disconnect(); - return "Can't connect to database." 
if(!DbLog_ConnectPush($hash,1)); - $dbh = $hash->{DBHP}; - } - - #vorbereiten der DB-Abfrage, DB-Modell-abhaengig - if ($hash->{MODEL} eq "POSTGRESQL") { - $sqlspec{get_timestamp} = "TO_CHAR(TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS')"; - $sqlspec{from_timestamp} = "TO_TIMESTAMP('$from', 'YYYY-MM-DD HH24:MI:SS')"; - $sqlspec{to_timestamp} = "TO_TIMESTAMP('$to', 'YYYY-MM-DD HH24:MI:SS')"; - #$sqlspec{reading_clause} = "(DEVICE || '|' || READING)"; - $sqlspec{order_by_hour} = "TO_CHAR(TIMESTAMP, 'YYYY-MM-DD HH24')"; - $sqlspec{max_value} = "MAX(VALUE)"; - $sqlspec{day_before} = "($sqlspec{from_timestamp} - INTERVAL '1 DAY')"; - } elsif ($hash->{MODEL} eq "ORACLE") { - $sqlspec{get_timestamp} = "TO_CHAR(TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS')"; - $sqlspec{from_timestamp} = "TO_TIMESTAMP('$from', 'YYYY-MM-DD HH24:MI:SS')"; - $sqlspec{to_timestamp} = "TO_TIMESTAMP('$to', 'YYYY-MM-DD HH24:MI:SS')"; - $sqlspec{order_by_hour} = "TO_CHAR(TIMESTAMP, 'YYYY-MM-DD HH24')"; - $sqlspec{max_value} = "MAX(VALUE)"; - $sqlspec{day_before} = "DATE_SUB($sqlspec{from_timestamp},INTERVAL 1 DAY)"; - } elsif ($hash->{MODEL} eq "MYSQL") { - $sqlspec{get_timestamp} = "DATE_FORMAT(TIMESTAMP, '%Y-%m-%d %H:%i:%s')"; - $sqlspec{from_timestamp} = "STR_TO_DATE('$from', '%Y-%m-%d %H:%i:%s')"; - $sqlspec{to_timestamp} = "STR_TO_DATE('$to', '%Y-%m-%d %H:%i:%s')"; - $sqlspec{order_by_hour} = "DATE_FORMAT(TIMESTAMP, '%Y-%m-%d %H')"; - $sqlspec{max_value} = "MAX(CAST(VALUE AS DECIMAL(20,8)))"; - $sqlspec{day_before} = "DATE_SUB($sqlspec{from_timestamp},INTERVAL 1 DAY)"; - } elsif ($hash->{MODEL} eq "SQLITE") { - $sqlspec{get_timestamp} = "TIMESTAMP"; - $sqlspec{from_timestamp} = "'$from'"; - $sqlspec{to_timestamp} = "'$to'"; - $sqlspec{order_by_hour} = "strftime('%Y-%m-%d %H', TIMESTAMP)"; - $sqlspec{max_value} = "MAX(VALUE)"; - $sqlspec{day_before} = "date($sqlspec{from_timestamp},'-1 day')"; - } else { - $sqlspec{get_timestamp} = "TIMESTAMP"; - $sqlspec{from_timestamp} = "'$from'"; - $sqlspec{to_timestamp} = "'$to'"; - $sqlspec{order_by_hour} = "strftime('%Y-%m-%d %H', TIMESTAMP)"; - $sqlspec{max_value} = "MAX(VALUE)"; - $sqlspec{day_before} = "date($sqlspec{from_timestamp},'-1 day')"; - } - - if($outf =~ m/(all|array)/) { - $sqlspec{all} = ",TYPE,EVENT,UNIT"; - $sqlspec{all_max} = ",MAX(TYPE) AS TYPE,MAX(EVENT) AS EVENT,MAX(UNIT) AS UNIT"; - } else { - $sqlspec{all} = ""; - $sqlspec{all_max} = ""; - } - - for(my $i=0; $i> 1); - $max[$i] = -(~0 >> 1); - $sum[$i] = 0; - $cnt[$i] = 0; - $lastv[$i] = 0; - $lastd[$i] = "undef"; - $mind[$i] = "undef"; - $maxd[$i] = "undef"; - $minval = (~0 >> 1); - $maxval = -(~0 >> 1); - $deltacalc = 0; - - if($readings[$i]->[3] && ($readings[$i]->[3] eq "delta-h" || $readings[$i]->[3] eq "delta-d")) { - $deltacalc = 1; - } - - my $stm; - my $stm2; - my $stmdelta; - $stm = "SELECT - MAX($sqlspec{get_timestamp}) AS TIMESTAMP, - MAX(DEVICE) AS DEVICE, - MAX(READING) AS READING, - $sqlspec{max_value} - $sqlspec{all_max} "; - - $stm .= "FROM current " if($inf eq "current"); - $stm .= "FROM history " if($inf eq "history"); - - $stm .= "WHERE 1=1 "; - - $stm .= "AND DEVICE = '".$readings[$i]->[0]."' " if ($readings[$i]->[0] !~ m(\%)); - $stm .= "AND DEVICE LIKE '".$readings[$i]->[0]."' " if(($readings[$i]->[0] !~ m(^\%$)) && ($readings[$i]->[0] =~ m(\%))); - - $stm .= "AND READING = '".$readings[$i]->[1]."' " if ($readings[$i]->[1] !~ m(\%)); - $stm .= "AND READING LIKE '".$readings[$i]->[1]."' " if(($readings[$i]->[1] !~ m(^%$)) && ($readings[$i]->[1] =~ m(\%))); - - $stmdelta = $stm; - - $stm .= "AND TIMESTAMP < 
$sqlspec{from_timestamp} "; - $stm .= "AND TIMESTAMP > $sqlspec{day_before} "; - - $stm .= "UNION ALL "; - - $stm2 = "SELECT - $sqlspec{get_timestamp}, - DEVICE, - READING, - VALUE - $sqlspec{all} "; - - $stm2 .= "FROM current " if($inf eq "current"); - $stm2 .= "FROM history " if($inf eq "history"); - - $stm2 .= "WHERE 1=1 "; - - $stm2 .= "AND DEVICE = '".$readings[$i]->[0]."' " if ($readings[$i]->[0] !~ m(\%)); - $stm2 .= "AND DEVICE LIKE '".$readings[$i]->[0]."' " if(($readings[$i]->[0] !~ m(^\%$)) && ($readings[$i]->[0] =~ m(\%))); - - $stm2 .= "AND READING = '".$readings[$i]->[1]."' " if ($readings[$i]->[1] !~ m(\%)); - $stm2 .= "AND READING LIKE '".$readings[$i]->[1]."' " if(($readings[$i]->[1] !~ m(^%$)) && ($readings[$i]->[1] =~ m(\%))); - - $stm2 .= "AND TIMESTAMP >= $sqlspec{from_timestamp} "; - $stm2 .= "AND TIMESTAMP <= $sqlspec{to_timestamp} "; # 03.09.2018 : https://forum.fhem.de/index.php/topic,65860.msg815640.html#msg815640 - $stm2 .= "ORDER BY TIMESTAMP"; - - if($deltacalc) { - $stmdelta .= "AND TIMESTAMP >= $sqlspec{from_timestamp} "; - $stmdelta .= "AND TIMESTAMP <= $sqlspec{to_timestamp} "; # 03.09.2018 : https://forum.fhem.de/index.php/topic,65860.msg815640.html#msg815640 - - $stmdelta .= "GROUP BY $sqlspec{order_by_hour} " if($deltacalc); - $stmdelta .= "ORDER BY TIMESTAMP"; - $stm .= $stmdelta; - } else { - $stm = $stm2; - } - - Log3 $hash->{NAME}, 4, "Processing Statement: $stm"; - - my $sth= $dbh->prepare($stm) || - return "Cannot prepare statement $stm: $DBI::errstr"; - my $rc= $sth->execute() || - return "Cannot execute statement $stm: $DBI::errstr"; - - if($outf =~ m/(all|array)/) { - $sth->bind_columns(undef, \$sql_timestamp, \$sql_device, \$sql_reading, \$sql_value, \$type, \$event, \$unit); - } - else { - $sth->bind_columns(undef, \$sql_timestamp, \$sql_device, \$sql_reading, \$sql_value); - } - - if ($outf =~ m/(all)/) { - $retval .= "Timestamp: Device, Type, Event, Reading, Value, Unit\n"; - $retval .= "=====================================================\n"; - } - - while($sth->fetch()) { - - ############ Auswerten des 5. Parameters: Regexp ################### - # die Regexep wird vor der Function ausgewertet und der Wert im Feld - # Value angepasst. - #################################################################### - if($readings[$i]->[4]) { - #evaluate - my $val = $sql_value; - my $ts = $sql_timestamp; - eval("$readings[$i]->[4]"); - $sql_value = $val; - $sql_timestamp = $ts; - if($@) {Log3 $hash->{NAME}, 3, "DbLog: Error in inline function: <".$readings[$i]->[4].">, Error: $@";} - } - - if($sql_timestamp lt $from && $deltacalc) { - if(Scalar::Util::looks_like_number($sql_value)){ - #nur setzen wenn nummerisch - $minval = $sql_value if($sql_value < $minval); - $maxval = $sql_value if($sql_value > $maxval); - $lastv[$i] = $sql_value; - } - } else { - - $writeout = 0; - $out_value = ""; - $out_tstamp = ""; - $retvaldummy = ""; - - if($readings[$i]->[4]) { - $out_tstamp = $sql_timestamp; - $writeout=1 if(!$deltacalc); - } - - ############ Auswerten des 4. 
Parameters: function ################### - if($readings[$i]->[3] && $readings[$i]->[3] eq "int") { - #nur den integerwert uebernehmen falls zb value=15°C - $out_value = $1 if($sql_value =~ m/^(\d+).*/o); - $out_tstamp = $sql_timestamp; - $writeout=1; - - } elsif ($readings[$i]->[3] && $readings[$i]->[3] =~ m/^int(\d+).*/o) { - #Uebernehme den Dezimalwert mit den angegebenen Stellen an Nachkommastellen - $out_value = $1 if($sql_value =~ m/^([-\.\d]+).*/o); - $out_tstamp = $sql_timestamp; - $writeout=1; - - } elsif ($readings[$i]->[3] && $readings[$i]->[3] eq "delta-ts" && lc($sql_value) !~ m(ignore)) { - #Berechung der vergangen Sekunden seit dem letten Logeintrag - #zb. die Zeit zwischen on/off - my @a = split("[- :]", $sql_timestamp); - my $akt_ts = mktime($a[5],$a[4],$a[3],$a[2],$a[1]-1,$a[0]-1900,0,0,-1); - if($lastd[$i] ne "undef") { - @a = split("[- :]", $lastd[$i]); - } - my $last_ts = mktime($a[5],$a[4],$a[3],$a[2],$a[1]-1,$a[0]-1900,0,0,-1); - $out_tstamp = $sql_timestamp; - $out_value = sprintf("%02d", $akt_ts - $last_ts); - if(lc($sql_value) =~ m(hide)){$writeout=0;} else {$writeout=1;} - - } elsif ($readings[$i]->[3] && $readings[$i]->[3] eq "delta-h") { - #Berechnung eines Stundenwertes - %tstamp = DbLog_explode_datetime($sql_timestamp, ()); - if($lastd[$i] eq "undef") { - %lasttstamp = DbLog_explode_datetime($sql_timestamp, ()); - $lasttstamp{hour} = "00"; - } else { - %lasttstamp = DbLog_explode_datetime($lastd[$i], ()); - } - # 04 01 - # 06 23 - if("$tstamp{hour}" ne "$lasttstamp{hour}") { - # Aenderung der stunde, Berechne Delta - #wenn die Stundendifferenz größer 1 ist muss ein Dummyeintrag erstellt werden - $retvaldummy = ""; - if(($tstamp{hour}-$lasttstamp{hour}) > 1) { - for (my $j=$lasttstamp{hour}+1; $j < $tstamp{hour}; $j++) { - $out_value = "0"; - $hour = $j; - $hour = '0'.$j if $j<10; - $cnt[$i]++; - $out_tstamp = DbLog_implode_datetime($tstamp{year}, $tstamp{month}, $tstamp{day}, $hour, "30", "00"); - if ($outf =~ m/(all)/) { - # Timestamp: Device, Type, Event, Reading, Value, Unit - $retvaldummy .= sprintf("%s: %s, %s, %s, %s, %s, %s\n", $out_tstamp, $sql_device, $type, $event, $sql_reading, $out_value, $unit); - - } elsif ($outf =~ m/(array)/) { - push(@ReturnArray, {"tstamp" => $out_tstamp, "device" => $sql_device, "type" => $type, "event" => $event, "reading" => $sql_reading, "value" => $out_value, "unit" => $unit}); - - } else { - $out_tstamp =~ s/\ /_/g; #needed by generating plots - $retvaldummy .= "$out_tstamp $out_value\n"; - } - } - } - if(($tstamp{hour}-$lasttstamp{hour}) < 0) { - for (my $j=0; $j < $tstamp{hour}; $j++) { - $out_value = "0"; - $hour = $j; - $hour = '0'.$j if $j<10; - $cnt[$i]++; - $out_tstamp = DbLog_implode_datetime($tstamp{year}, $tstamp{month}, $tstamp{day}, $hour, "30", "00"); - if ($outf =~ m/(all)/) { - # Timestamp: Device, Type, Event, Reading, Value, Unit - $retvaldummy .= sprintf("%s: %s, %s, %s, %s, %s, %s\n", $out_tstamp, $sql_device, $type, $event, $sql_reading, $out_value, $unit); - - } elsif ($outf =~ m/(array)/) { - push(@ReturnArray, {"tstamp" => $out_tstamp, "device" => $sql_device, "type" => $type, "event" => $event, "reading" => $sql_reading, "value" => $out_value, "unit" => $unit}); - - } else { - $out_tstamp =~ s/\ /_/g; #needed by generating plots - $retvaldummy .= "$out_tstamp $out_value\n"; - } - } - } - $out_value = sprintf("%g", $maxval - $minval); - $sum[$i] += $out_value; - $cnt[$i]++; - $out_tstamp = DbLog_implode_datetime($lasttstamp{year}, $lasttstamp{month}, $lasttstamp{day}, $lasttstamp{hour}, "30", 
"00"); - #$minval = (~0 >> 1); - $minval = $maxval; -# $maxval = -(~0 >> 1); - $writeout=1; - } - } elsif ($readings[$i]->[3] && $readings[$i]->[3] eq "delta-d") { - #Berechnung eines Tageswertes - %tstamp = DbLog_explode_datetime($sql_timestamp, ()); - if($lastd[$i] eq "undef") { - %lasttstamp = DbLog_explode_datetime($sql_timestamp, ()); - } else { - %lasttstamp = DbLog_explode_datetime($lastd[$i], ()); - } - if("$tstamp{day}" ne "$lasttstamp{day}") { - # Aenderung des Tages, Berechne Delta - $out_value = sprintf("%g", $maxval - $minval); - $sum[$i] += $out_value; - $cnt[$i]++; - $out_tstamp = DbLog_implode_datetime($lasttstamp{year}, $lasttstamp{month}, $lasttstamp{day}, "12", "00", "00"); -# $minval = (~0 >> 1); - $minval = $maxval; -# $maxval = -(~0 >> 1); - $writeout=1; - } - } else { - $out_value = $sql_value; - $out_tstamp = $sql_timestamp; - $writeout=1; - } - - # Wenn Attr SuppressUndef gesetzt ist, dann ausfiltern aller undef-Werte - $writeout = 0 if (!defined($sql_value) && AttrVal($hash->{NAME}, "suppressUndef", 0)); - - ###################### Ausgabe ########################### - if($writeout) { - if ($outf =~ m/(all)/) { - # Timestamp: Device, Type, Event, Reading, Value, Unit - $retval .= sprintf("%s: %s, %s, %s, %s, %s, %s\n", $out_tstamp, $sql_device, $type, $event, $sql_reading, $out_value, $unit); - $retval .= $retvaldummy; - - } elsif ($outf =~ m/(array)/) { - push(@ReturnArray, {"tstamp" => $out_tstamp, "device" => $sql_device, "type" => $type, "event" => $event, "reading" => $sql_reading, "value" => $out_value, "unit" => $unit}); - - } else { - $out_tstamp =~ s/\ /_/g; #needed by generating plots - $retval .= "$out_tstamp $out_value\n"; - $retval .= $retvaldummy; - } - } - - if(Scalar::Util::looks_like_number($sql_value)){ - #nur setzen wenn nummerisch - if($deltacalc) { - if(Scalar::Util::looks_like_number($out_value)){ - if($out_value < $min[$i]) { - $min[$i] = $out_value; - $mind[$i] = $out_tstamp; - } - if($out_value > $max[$i]) { - $max[$i] = $out_value; - $maxd[$i] = $out_tstamp; - } - } - $maxval = $sql_value; - } else { - if($sql_value < $min[$i]) { - $min[$i] = $sql_value; - $mind[$i] = $sql_timestamp; - } - if($sql_value > $max[$i]) { - $max[$i] = $sql_value; - $maxd[$i] = $sql_timestamp; - } - $sum[$i] += $sql_value; - $minval = $sql_value if($sql_value < $minval); - $maxval = $sql_value if($sql_value > $maxval); - } - } else { - $min[$i] = 0; - $max[$i] = 0; - $sum[$i] = 0; - $minval = 0; - $maxval = 0; - } - if(!$deltacalc) { - $cnt[$i]++; - $lastv[$i] = $sql_value; - } else { - $lastv[$i] = $out_value if($out_value); - } - $lastd[$i] = $sql_timestamp; - } - } #while fetchrow - - ######## den letzten Abschlusssatz rausschreiben ########## - if($readings[$i]->[3] && ($readings[$i]->[3] eq "delta-h" || $readings[$i]->[3] eq "delta-d")) { - if($lastd[$i] eq "undef") { - $out_value = "0"; - $out_tstamp = DbLog_implode_datetime($from_datetime{year}, $from_datetime{month}, $from_datetime{day}, $from_datetime{hour}, "30", "00") if($readings[$i]->[3] eq "delta-h"); - $out_tstamp = DbLog_implode_datetime($from_datetime{year}, $from_datetime{month}, $from_datetime{day}, "12", "00", "00") if($readings[$i]->[3] eq "delta-d"); - } else { - %lasttstamp = DbLog_explode_datetime($lastd[$i], ()); - $out_value = sprintf("%g", $maxval - $minval); - $out_tstamp = DbLog_implode_datetime($lasttstamp{year}, $lasttstamp{month}, $lasttstamp{day}, $lasttstamp{hour}, "30", "00") if($readings[$i]->[3] eq "delta-h"); - $out_tstamp = DbLog_implode_datetime($lasttstamp{year}, 
$lasttstamp{month}, $lasttstamp{day}, "12", "00", "00") if($readings[$i]->[3] eq "delta-d");
-          }
-          $sum[$i] += $out_value;
-          $cnt[$i]++;
-          if($outf =~ m/(all)/) {
-              $retval .= sprintf("%s: %s %s %s %s %s %s\n", $out_tstamp, $sql_device, $type, $event, $sql_reading, $out_value, $unit);
-
-          } elsif ($outf =~ m/(array)/) {
-              push(@ReturnArray, {"tstamp" => $out_tstamp, "device" => $sql_device, "type" => $type, "event" => $event, "reading" => $sql_reading, "value" => $out_value, "unit" => $unit});
-
-          } else {
-              $out_tstamp =~ s/\ /_/g;          # needed for generating plots
-              $retval .= "$out_tstamp $out_value\n";
-          }
-      }
-      # set the data separator
-      $retval .= "#$readings[$i]->[0]";
-      $retval .= ":";
-      $retval .= "$readings[$i]->[1]" if($readings[$i]->[1]);
-      $retval .= ":";
-      $retval .= "$readings[$i]->[2]" if($readings[$i]->[2]);
-      $retval .= ":";
-      $retval .= "$readings[$i]->[3]" if($readings[$i]->[3]);
-      $retval .= ":";
-      $retval .= "$readings[$i]->[4]" if($readings[$i]->[4]);
-      $retval .= "\n";
-  } #for @readings
-
-  # transfer the collected values into the global variable %data
-  for(my $j=0; $j<int(@readings); $j++) {
-      my $k = $j+1;
-      $data{"min$k"}      = $min[$j];
-      $data{"max$k"}      = $max[$j];
-      $data{"avg$k"}      = $cnt[$j] ? sprintf("%0.2f", $sum[$j]/$cnt[$j]) : 0;
-      $data{"sum$k"}      = $sum[$j];
-      $data{"cnt$k"}      = $cnt[$j];
-      $data{"currval$k"}  = $lastv[$j];
-      $data{"currdate$k"} = $lastd[$j];
-      $data{"mindate$k"}  = $mind[$j];
-      $data{"maxdate$k"}  = $maxd[$j];
-  }
-
-  $dbh->disconnect() if( $hash->{PID} != $$ );
-
-  if($internal) {
-      $internal_data = \$retval;
-      return undef;
-
-  } elsif($outf =~ m/(array)/) {
-      return @ReturnArray;
-
-  } else {
-      $retval = Encode::encode_utf8($retval) if($utf8);
-      # Log3 $name, 5, "DbLog $name -> Result of get:\n$retval";
-      return $retval;
-  }
-}
-
-##########################################################################
-#
-# Configuration check DbLog <-> database
-#
-##########################################################################
-sub DbLog_configcheck($) {
-  my ($hash)  = @_;
-  my $name    = $hash->{NAME};
-  my $dbmodel = $hash->{MODEL};
-  my $dbconn  = $hash->{dbconn};
-  my $dbname  = (split(/;|=/, $dbconn))[1];
-  my ($check, $rec, %dbconfig);
-
-  ### Start
-  #######################################################################
-  $check  = "<html>";
-  $check .= "Result of DbLog version check<br><br>";
-  $check .= "Used DbLog version: $hash->{VERSION}<br>";
-  $check .= "Recommendation: Your running version may be the current one. Please check for updates of DbLog periodically.<br><br>";
-
-  ### Configuration read check
-  #######################################################################
-  $check .= "Result of configuration read check<br><br>";

"; - my $st = configDBUsed()?"configDB (don't forget upload configuration file if changed. Use \"configdb filelist\" and look for your configuration file.)":"file"; - $check .= "Connection parameter store type: $st
"; - my ($err, @config) = FileRead($hash->{CONFIGURATION}); - if (!$err) { - eval join("\n", @config); - $rec = "parameter: "; - $rec .= "Connection -> could not read, " if (!defined $dbconfig{connection}); - $rec .= "Connection -> ".$dbconfig{connection}.", " if (defined $dbconfig{connection}); - $rec .= "User -> could not read, " if (!defined $dbconfig{user}); - $rec .= "User -> ".$dbconfig{user}.", " if (defined $dbconfig{user}); - $rec .= "Password -> could not read " if (!defined $dbconfig{password}); - $rec .= "Password -> read o.k. " if (defined $dbconfig{password}); - } else { - $rec = $err; - } - $check .= "Connection $rec

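# For orientation: the configuration file read above is plain Perl that is
# eval'ed to fill %dbconfig. A minimal sketch (connection string and
# credentials are placeholders only):
#   %dbconfig = (
#       connection => "mysql:database=fhem;host=localhost;port=3306",
#       user       => "fhemuser",
#       password   => "fhempassword",
#   );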
"; - - ### Connection und Encoding check - ####################################################################### - my (@ce,@se); - my ($chutf8mod,$chutf8dat); - if($dbmodel =~ /MYSQL/) { - @ce = DbLog_sqlget($hash,"SHOW VARIABLES LIKE 'character_set_connection'"); - $chutf8mod = @ce?uc($ce[1]):"no result"; - @se = DbLog_sqlget($hash,"SHOW VARIABLES LIKE 'character_set_database'"); - $chutf8dat = @se?uc($se[1]):"no result"; - if($chutf8mod eq $chutf8dat) { - $rec = "settings o.k."; - } else { - $rec = "Both encodings should be identical. You can adjust the usage of UTF8 connection by setting the UTF8 parameter in file '$hash->{CONFIGURATION}' to the right value. "; - } - } - if($dbmodel =~ /POSTGRESQL/) { - @ce = DbLog_sqlget($hash,"SHOW CLIENT_ENCODING"); - $chutf8mod = @ce?uc($ce[0]):"no result"; - @se = DbLog_sqlget($hash,"select character_set_name from information_schema.character_sets"); - $chutf8dat = @se?uc($se[0]):"no result"; - if($chutf8mod eq $chutf8dat) { - $rec = "settings o.k."; - } else { - $rec = "This is only an information. PostgreSQL supports automatic character set conversion between server and client for certain character set combinations. The conversion information is stored in the pg_conversion system catalog. PostgreSQL comes with some predefined conversions."; - } - } - if($dbmodel =~ /SQLITE/) { - @ce = DbLog_sqlget($hash,"PRAGMA encoding"); - $chutf8dat = @ce?uc($ce[0]):"no result"; - @se = DbLog_sqlget($hash,"PRAGMA table_info(history)"); - $rec = "This is only an information about text encoding used by the main database."; - } - - $check .= "Result of connection check

"; - - if(@ce && @se) { - $check .= "Connection to database $dbname successfully done.
"; - $check .= "Recommendation: settings o.k.

"; - } - - if(!@ce || !@se) { - $check .= "Connection to database was not successful.
"; - $check .= "Recommendation: Plese check logfile for further information.

"; - $check .= ""; - return $check; - } - $check .= "Result of encoding check

"; - $check .= "Encoding used by Client (connection): $chutf8mod
" if($dbmodel !~ /SQLITE/); - $check .= "Encoding used by DB $dbname: $chutf8dat
"; - $check .= "Recommendation: $rec

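# Illustrative only (the exact statement depends on your setup): a MySQL
# database whose encoding differs from the client side could be aligned with e.g.
#   ALTER DATABASE fhem CHARACTER SET utf8;
# or by adjusting the UTF8 parameter in the connection configuration file.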
"; - - ### Check Betriebsmodus - ####################################################################### - my $mode = $hash->{MODE}; - my $sfx = AttrVal("global", "language", "EN"); - $sfx = ($sfx eq "EN" ? "" : "_$sfx"); - - $check .= "Result of logmode check

"; - $check .= "Logmode of DbLog-device $name is: $mode
"; - if($mode =~ /asynchronous/) { - my $max = AttrVal("global", "blockingCallMax", 0); - if(!$max || $max >= 6) { - $rec = "settings o.k."; - } else { - $rec = "WARNING - you are running asynchronous mode that is recommended, but the value of global device attribute \"blockingCallMax\" is set quite small.
"; - $rec .= "This may cause problems in operation. It is recommended to increase the global blockingCallMax attribute."; - } - } else { - $rec = "Switch $name to the asynchronous logmode by setting the 'asyncMode' attribute. The advantage of this mode is to log events non-blocking.
"; - $rec .= "There are attributes 'syncInterval' and 'cacheLimit' relevant for this working mode.
"; - $rec .= "Please refer to commandref for further informations about these attributes."; - } - $check .= "Recommendation: $rec

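# Example of switching a device to the recommended asynchronous logmode
# (device name and values are placeholders only):
#   attr myDbLog asyncMode 1
#   attr myDbLog syncInterval 30
#   attr myDbLog cacheLimit 500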
"; - - if($mode =~ /asynchronous/) { - my $shutdownWait = AttrVal($name,"shutdownWait",undef); - my $bpt = ReadingsVal($name,"background_processing_time",undef); - my $bptv = defined($bpt)?int($bpt)+2:2; - # $shutdownWait = defined($shutdownWait)?$shutdownWait:undef; - my $sdw = defined($shutdownWait)?$shutdownWait:" "; - $check .= "Result of shutdown sequence preparation check

"; - $check .= "Attribute \"shutdownWait\" is set to: $sdw
"; - if(!defined($shutdownWait) || $shutdownWait < $bptv) { - if(!$bpt) { - $rec = "Due to Reading \"background_processing_time\" is not available (you may set attribute \"showproctime\"), there is only a rough estimate to
"; - $rec .= "set attribute \"shutdownWait\" to $bptv seconds.
"; - } else { - $rec = "Please set this attribute at least to $bptv seconds to avoid data loss when system shutdown is initiated."; - } - } else { - if(!$bpt) { - $rec = "The setting may be ok. But due to the Reading \"background_processing_time\" is not available (you may set attribute \"showproctime\"), the current
"; - $rec .= "setting is only a rough estimate.
"; - } else { - $rec = "settings o.k."; - } - } - $check .= "Recommendation: $rec

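# Example matching the recommendation above (values are placeholders):
#   attr myDbLog showproctime 1   # provides Reading "background_processing_time"
#   attr myDbLog shutdownWait 5   # should be >= background_processing_time + 2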
"; - } - - ### Check Plot Erstellungsmodus - ####################################################################### - $check .= "Result of plot generation method check

"; - my @webdvs = devspec2array("TYPE=FHEMWEB:FILTER=STATE=Initialized"); - my $forks = 1; - my $wall; - foreach (@webdvs) { - my $web = $_; - $wall .= $web.": plotfork=".AttrVal($web,"plotfork",0)."
"; - $forks = 0 if(!AttrVal($web,"plotfork",0)); - } - if(!$forks) { - $check .= "WARNING - at least one of your FHEMWEB devices have attribute \"plotfork = 1\" not set. This may cause blocking situations when creating plots.
"; - $check .= $wall; - $rec = "You should set attribute \"plotfork = 1\" in relevant devices"; - } else { - $check .= $wall; - $rec = "settings o.k."; - } - $check .= "Recommendation: $rec

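# Example for the recommendation above (WEB is a placeholder FHEMWEB device):
#   attr WEB plotfork 1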
"; - - ### Check Spaltenbreite history - ####################################################################### - my (@sr_dev,@sr_typ,@sr_evt,@sr_rdg,@sr_val,@sr_unt); - my ($cdat_dev,$cdat_typ,$cdat_evt,$cdat_rdg,$cdat_val,$cdat_unt); - my ($cmod_dev,$cmod_typ,$cmod_evt,$cmod_rdg,$cmod_val,$cmod_unt); - - if($dbmodel =~ /MYSQL/) { - @sr_dev = DbLog_sqlget($hash,"SHOW FIELDS FROM history where FIELD='DEVICE'"); - @sr_typ = DbLog_sqlget($hash,"SHOW FIELDS FROM history where FIELD='TYPE'"); - @sr_evt = DbLog_sqlget($hash,"SHOW FIELDS FROM history where FIELD='EVENT'"); - @sr_rdg = DbLog_sqlget($hash,"SHOW FIELDS FROM history where FIELD='READING'"); - @sr_val = DbLog_sqlget($hash,"SHOW FIELDS FROM history where FIELD='VALUE'"); - @sr_unt = DbLog_sqlget($hash,"SHOW FIELDS FROM history where FIELD='UNIT'"); - } - if($dbmodel =~ /POSTGRESQL/) { - @sr_dev = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='history' and column_name='device'"); - @sr_typ = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='history' and column_name='type'"); - @sr_evt = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='history' and column_name='event'"); - @sr_rdg = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='history' and column_name='reading'"); - @sr_val = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='history' and column_name='value'"); - @sr_unt = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='history' and column_name='unit'"); - } - if($dbmodel =~ /SQLITE/) { - my $dev = (DbLog_sqlget($hash,"SELECT sql FROM sqlite_master WHERE name = 'history'"))[0]; - $cdat_dev = $dev?$dev:"no result"; - $cdat_typ = $cdat_evt = $cdat_rdg = $cdat_val = $cdat_unt = $cdat_dev; - $cdat_dev =~ s/.*DEVICE.varchar\(([\d]*)\).*/$1/e; - $cdat_typ =~ s/.*TYPE.varchar\(([\d]*)\).*/$1/e; - $cdat_evt =~ s/.*EVENT.varchar\(([\d]*)\).*/$1/e; - $cdat_rdg =~ s/.*READING.varchar\(([\d]*)\).*/$1/e; - $cdat_val =~ s/.*VALUE.varchar\(([\d]*)\).*/$1/e; - $cdat_unt =~ s/.*UNIT.varchar\(([\d]*)\).*/$1/e; - } - if ($dbmodel !~ /SQLITE/) { - $cdat_dev = @sr_dev?($sr_dev[1]):"no result"; - $cdat_dev =~ tr/varchar\(|\)//d if($cdat_dev ne "no result"); - $cdat_typ = @sr_typ?($sr_typ[1]):"no result"; - $cdat_typ =~ tr/varchar\(|\)//d if($cdat_typ ne "no result"); - $cdat_evt = @sr_evt?($sr_evt[1]):"no result"; - $cdat_evt =~ tr/varchar\(|\)//d if($cdat_evt ne "no result"); - $cdat_rdg = @sr_rdg?($sr_rdg[1]):"no result"; - $cdat_rdg =~ tr/varchar\(|\)//d if($cdat_rdg ne "no result"); - $cdat_val = @sr_val?($sr_val[1]):"no result"; - $cdat_val =~ tr/varchar\(|\)//d if($cdat_val ne "no result"); - $cdat_unt = @sr_unt?($sr_unt[1]):"no result"; - $cdat_unt =~ tr/varchar\(|\)//d if($cdat_unt ne "no result"); - } - $cmod_dev = $hash->{HELPER}{DEVICECOL}; - $cmod_typ = $hash->{HELPER}{TYPECOL}; - $cmod_evt = $hash->{HELPER}{EVENTCOL}; - $cmod_rdg = $hash->{HELPER}{READINGCOL}; - $cmod_val = $hash->{HELPER}{VALUECOL}; - $cmod_unt = $hash->{HELPER}{UNITCOL}; - - if($cdat_dev >= $cmod_dev && $cdat_typ >= $cmod_typ && $cdat_evt >= $cmod_evt && $cdat_rdg >= $cmod_rdg && $cdat_val >= $cmod_val && $cdat_unt >= $cmod_unt) { - $rec = "settings o.k."; - } else { - if ($dbmodel !~ 
/SQLITE/) { - $rec = "The relation between column width in table history and the field width used in device $name don't meet the requirements. "; - $rec .= "Please make sure that the width of database field definition is equal or larger than the field width used by the module. Compare the given results.
"; - $rec .= "Currently the default values for field width are:

"; - $rec .= "DEVICE: $columns{DEVICE}
"; - $rec .= "TYPE: $columns{TYPE}
"; - $rec .= "EVENT: $columns{EVENT}
"; - $rec .= "READING: $columns{READING}
"; - $rec .= "VALUE: $columns{VALUE}
"; - $rec .= "UNIT: $columns{UNIT}

"; - $rec .= "You can change the column width in database by a statement like 'alter table history modify VALUE varchar(128);' (example for changing field 'VALUE'). "; - $rec .= "You can do it for example by executing 'sqlCmd' in DbRep or in a SQL-Editor of your choice. (switch $name to asynchron mode for non-blocking).
"; - $rec .= "Alternatively the field width used by $name can be adjusted by setting attributes 'colEvent', 'colReading', 'colValue'. (pls. refer to commandref)"; - } else { - $rec = "WARNING - The relation between column width in table history and the field width used by device $name should be equal but it differs."; - $rec .= "The field width used by $name can be adjusted by setting attributes 'colEvent', 'colReading', 'colValue'. (pls. refer to commandref)"; - $rec .= "Because you use SQLite this is only a warning. Normally the database can handle these differences. "; - } - } - - $check .= "Result of table 'history' check

"; - $check .= "Column width set in DB $dbname.history: 'DEVICE' = $cdat_dev, 'TYPE' = $cdat_typ, 'EVENT' = $cdat_evt, 'READING' = $cdat_rdg, 'VALUE' = $cdat_val, 'UNIT' = $cdat_unt
"; - $check .= "Column width used by $name: 'DEVICE' = $cmod_dev, 'TYPE' = $cmod_typ, 'EVENT' = $cmod_evt, 'READING' = $cmod_rdg, 'VALUE' = $cmod_val, 'UNIT' = $cmod_unt
"; - $check .= "Recommendation: $rec

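# Example of adjusting the module-side field widths as recommended above
# (device name and widths are placeholders only):
#   attr myDbLog colEvent 512
#   attr myDbLog colReading 64
#   attr myDbLog colValue 128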
"; - - ### Check Spaltenbreite current - ####################################################################### - if($dbmodel =~ /MYSQL/) { - @sr_dev = DbLog_sqlget($hash,"SHOW FIELDS FROM current where FIELD='DEVICE'"); - @sr_typ = DbLog_sqlget($hash,"SHOW FIELDS FROM current where FIELD='TYPE'"); - @sr_evt = DbLog_sqlget($hash,"SHOW FIELDS FROM current where FIELD='EVENT'"); - @sr_rdg = DbLog_sqlget($hash,"SHOW FIELDS FROM current where FIELD='READING'"); - @sr_val = DbLog_sqlget($hash,"SHOW FIELDS FROM current where FIELD='VALUE'"); - @sr_unt = DbLog_sqlget($hash,"SHOW FIELDS FROM current where FIELD='UNIT'"); - } - - if($dbmodel =~ /POSTGRESQL/) { - @sr_dev = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='current' and column_name='device'"); - @sr_typ = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='current' and column_name='type'"); - @sr_evt = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='current' and column_name='event'"); - @sr_rdg = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='current' and column_name='reading'"); - @sr_val = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='current' and column_name='value'"); - @sr_unt = DbLog_sqlget($hash,"select column_name,character_maximum_length from information_schema.columns where table_name='current' and column_name='unit'"); - } - if($dbmodel =~ /SQLITE/) { - my $dev = (DbLog_sqlget($hash,"SELECT sql FROM sqlite_master WHERE name = 'current'"))[0]; - $cdat_dev = $dev?$dev:"no result"; - $cdat_typ = $cdat_evt = $cdat_rdg = $cdat_val = $cdat_unt = $cdat_dev; - $cdat_dev =~ s/.*DEVICE.varchar\(([\d]*)\).*/$1/e; - $cdat_typ =~ s/.*TYPE.varchar\(([\d]*)\).*/$1/e; - $cdat_evt =~ s/.*EVENT.varchar\(([\d]*)\).*/$1/e; - $cdat_rdg =~ s/.*READING.varchar\(([\d]*)\).*/$1/e; - $cdat_val =~ s/.*VALUE.varchar\(([\d]*)\).*/$1/e; - $cdat_unt =~ s/.*UNIT.varchar\(([\d]*)\).*/$1/e; - } - if ($dbmodel !~ /SQLITE/) { - $cdat_dev = @sr_dev?($sr_dev[1]):"no result"; - $cdat_dev =~ tr/varchar\(|\)//d if($cdat_dev ne "no result"); - $cdat_typ = @sr_typ?($sr_typ[1]):"no result"; - $cdat_typ =~ tr/varchar\(|\)//d if($cdat_typ ne "no result"); - $cdat_evt = @sr_evt?($sr_evt[1]):"no result"; - $cdat_evt =~ tr/varchar\(|\)//d if($cdat_evt ne "no result"); - $cdat_rdg = @sr_rdg?($sr_rdg[1]):"no result"; - $cdat_rdg =~ tr/varchar\(|\)//d if($cdat_rdg ne "no result"); - $cdat_val = @sr_val?($sr_val[1]):"no result"; - $cdat_val =~ tr/varchar\(|\)//d if($cdat_val ne "no result"); - $cdat_unt = @sr_unt?($sr_unt[1]):"no result"; - $cdat_unt =~ tr/varchar\(|\)//d if($cdat_unt ne "no result"); - } - $cmod_dev = $hash->{HELPER}{DEVICECOL}; - $cmod_typ = $hash->{HELPER}{TYPECOL}; - $cmod_evt = $hash->{HELPER}{EVENTCOL}; - $cmod_rdg = $hash->{HELPER}{READINGCOL}; - $cmod_val = $hash->{HELPER}{VALUECOL}; - $cmod_unt = $hash->{HELPER}{UNITCOL}; - - if($cdat_dev >= $cmod_dev && $cdat_typ >= $cmod_typ && $cdat_evt >= $cmod_evt && $cdat_rdg >= $cmod_rdg && $cdat_val >= $cmod_val && $cdat_unt >= $cmod_unt) { - $rec = "settings o.k."; - } else { - if ($dbmodel !~ /SQLITE/) { - $rec = "The relation between column width in table current and the field width used in device $name don't meet the requirements. 
"; - $rec .= "Please make sure that the width of database field definition is equal or larger than the field width used by the module. Compare the given results.
"; - $rec .= "Currently the default values for field width are:

"; - $rec .= "DEVICE: $columns{DEVICE}
"; - $rec .= "TYPE: $columns{TYPE}
"; - $rec .= "EVENT: $columns{EVENT}
"; - $rec .= "READING: $columns{READING}
"; - $rec .= "VALUE: $columns{VALUE}
"; - $rec .= "UNIT: $columns{UNIT}

"; - $rec .= "You can change the column width in database by a statement like 'alter table current modify VALUE varchar(128);' (example for changing field 'VALUE'). "; - $rec .= "You can do it for example by executing 'sqlCmd' in DbRep or in a SQL-Editor of your choice. (switch $name to asynchron mode for non-blocking).
"; - $rec .= "Alternatively the field width used by $name can be adjusted by setting attributes 'colEvent', 'colReading', 'colValue'. (pls. refer to commandref)"; - } else { - $rec = "WARNING - The relation between column width in table current and the field width used by device $name should be equal but it differs. "; - $rec .= "The field width used by $name can be adjusted by setting attributes 'colEvent', 'colReading', 'colValue'. (pls. refer to commandref)"; - $rec .= "Because you use SQLite this is only a warning. Normally the database can handle these differences. "; - } - } - - $check .= "Result of table 'current' check

"; - $check .= "Column width set in DB $dbname.current: 'DEVICE' = $cdat_dev, 'TYPE' = $cdat_typ, 'EVENT' = $cdat_evt, 'READING' = $cdat_rdg, 'VALUE' = $cdat_val, 'UNIT' = $cdat_unt
"; - $check .= "Column width used by $name: 'DEVICE' = $cmod_dev, 'TYPE' = $cmod_typ, 'EVENT' = $cmod_evt, 'READING' = $cmod_rdg, 'VALUE' = $cmod_val, 'UNIT' = $cmod_unt
"; - $check .= "Recommendation: $rec

"; -#} - - ### Check Vorhandensein Search_Idx mit den empfohlenen Spalten - ####################################################################### - my (@six,@six_dev,@six_rdg,@six_tsp); - my ($idef,$idef_dev,$idef_rdg,$idef_tsp); - $check .= "Result of check 'Search_Idx' availability

"; - - if($dbmodel =~ /MYSQL/) { - @six = DbLog_sqlget($hash,"SHOW INDEX FROM history where Key_name='Search_Idx'"); - if (!@six) { - $check .= "The index 'Search_Idx' is missing.
"; - $rec = "You can create the index by executing statement 'CREATE INDEX Search_Idx ON `history` (DEVICE, READING, TIMESTAMP) USING BTREE;'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - $rec .= "Please make sure the device '$name' is operating in asynchronous mode to avoid FHEM from blocking when creating the index.
"; - $rec .= "Note: If you have just created another index which covers the same fields and order as suggested (e.g. a primary key) you don't need to create the 'Search_Idx' as well !
"; - } else { - @six_dev = DbLog_sqlget($hash,"SHOW INDEX FROM history where Key_name='Search_Idx' and Column_name='DEVICE'"); - @six_rdg = DbLog_sqlget($hash,"SHOW INDEX FROM history where Key_name='Search_Idx' and Column_name='READING'"); - @six_tsp = DbLog_sqlget($hash,"SHOW INDEX FROM history where Key_name='Search_Idx' and Column_name='TIMESTAMP'"); - if (@six_dev && @six_rdg && @six_tsp) { - $check .= "Index 'Search_Idx' exists and contains recommended fields 'DEVICE', 'READING', 'TIMESTAMP'.
"; - $rec = "settings o.k."; - } else { - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'DEVICE'.
" if (!@six_dev); - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'READING'.
" if (!@six_rdg); - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'TIMESTAMP'.
" if (!@six_tsp); - $rec = "The index should contain the fields 'DEVICE', 'READING', 'TIMESTAMP'. "; - $rec .= "You can change the index by executing e.g.
"; - $rec .= "'ALTER TABLE `history` DROP INDEX `Search_Idx`, ADD INDEX `Search_Idx` (`DEVICE`, `READING`, `TIMESTAMP`) USING BTREE;'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - } - } - } - if($dbmodel =~ /POSTGRESQL/) { - @six = DbLog_sqlget($hash,"SELECT * FROM pg_indexes WHERE tablename='history' and indexname ='Search_Idx'"); - if (!@six) { - $check .= "The index 'Search_Idx' is missing.
"; - $rec = "You can create the index by executing statement 'CREATE INDEX \"Search_Idx\" ON history USING btree (device, reading, \"timestamp\")'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - $rec .= "Please make sure the device '$name' is operating in asynchronous mode to avoid FHEM from blocking when creating the index.
"; - $rec .= "Note: If you have just created another index which covers the same fields and order as suggested (e.g. a primary key) you don't need to create the 'Search_Idx' as well !
"; - } else { - $idef = $six[4]; - $idef_dev = 1 if($idef =~ /device/); - $idef_rdg = 1 if($idef =~ /reading/); - $idef_tsp = 1 if($idef =~ /timestamp/); - if ($idef_dev && $idef_rdg && $idef_tsp) { - $check .= "Index 'Search_Idx' exists and contains recommended fields 'DEVICE', 'READING', 'TIMESTAMP'.
"; - $rec = "settings o.k."; - } else { - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'DEVICE'.
" if (!$idef_dev); - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'READING'.
" if (!$idef_rdg); - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'TIMESTAMP'.
" if (!$idef_tsp); - $rec = "The index should contain the fields 'DEVICE', 'READING', 'TIMESTAMP'. "; - $rec .= "You can change the index by executing e.g.
"; - $rec .= "'DROP INDEX \"Search_Idx\"; CREATE INDEX \"Search_Idx\" ON history USING btree (device, reading, \"timestamp\")'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - } - } - } - if($dbmodel =~ /SQLITE/) { - @six = DbLog_sqlget($hash,"SELECT name,sql FROM sqlite_master WHERE type='index' AND name='Search_Idx'"); - if (!$six[0]) { - $check .= "The index 'Search_Idx' is missing.
"; - $rec = "You can create the index by executing statement 'CREATE INDEX Search_Idx ON `history` (DEVICE, READING, TIMESTAMP)'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - $rec .= "Please make sure the device '$name' is operating in asynchronous mode to avoid FHEM from blocking when creating the index.
"; - $rec .= "Note: If you have just created another index which covers the same fields and order as suggested (e.g. a primary key) you don't need to create the 'Search_Idx' as well !
"; - } else { - $idef = $six[1]; - $idef_dev = 1 if(lc($idef) =~ /device/); - $idef_rdg = 1 if(lc($idef) =~ /reading/); - $idef_tsp = 1 if(lc($idef) =~ /timestamp/); - if ($idef_dev && $idef_rdg && $idef_tsp) { - $check .= "Index 'Search_Idx' exists and contains recommended fields 'DEVICE', 'READING', 'TIMESTAMP'.
"; - $rec = "settings o.k."; - } else { - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'DEVICE'.
" if (!$idef_dev); - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'READING'.
" if (!$idef_rdg); - $check .= "Index 'Search_Idx' exists but doesn't contain recommended field 'TIMESTAMP'.
" if (!$idef_tsp); - $rec = "The index should contain the fields 'DEVICE', 'READING', 'TIMESTAMP'. "; - $rec .= "You can change the index by executing e.g.
"; - $rec .= "'DROP INDEX \"Search_Idx\"; CREATE INDEX Search_Idx ON `history` (DEVICE, READING, TIMESTAMP)'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - } - } - } - - $check .= "Recommendation: $rec

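# Illustrative follow-up check (MySQL flavour; other databases analogous):
#   SHOW INDEX FROM history WHERE Key_name='Search_Idx';
# should list the columns DEVICE, READING and TIMESTAMP after a successful run.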
"; - - ### Check Index Report_Idx für DbRep-Device falls DbRep verwendet wird - ####################################################################### - my ($dbrp,$irep,); - my (@dix,@dix_rdg,@dix_tsp,$irep_rdg,$irep_tsp); - my $isused = 0; - my @repdvs = devspec2array("TYPE=DbRep"); - $check .= "Result of check 'Report_Idx' availability for DbRep-devices

"; - - foreach (@repdvs) { - $dbrp = $_; - if(!$defs{$dbrp}) { - Log3 ($name, 2, "DbLog $name -> Device '$dbrp' found by configCheck doesn't exist !"); - next; - } - if ($defs{$dbrp}->{DEF} eq $name) { - # DbRep Device verwendet aktuelles DbLog-Device - Log3 ($name, 5, "DbLog $name -> DbRep-Device '$dbrp' uses $name."); - $isused = 1; - } - } - if ($isused) { - if($dbmodel =~ /MYSQL/) { - @dix = DbLog_sqlget($hash,"SHOW INDEX FROM history where Key_name='Report_Idx'"); - if (!@dix) { - $check .= "At least one DbRep-device assigned to $name is used, but the recommended index 'Report_Idx' is missing.
"; - $rec = "You can create the index by executing statement 'CREATE INDEX Report_Idx ON `history` (TIMESTAMP, READING) USING BTREE;'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - $rec .= "Please make sure the device '$name' is operating in asynchronous mode to avoid FHEM from blocking when creating the index.
"; - $rec .= "Note: If you have just created another index which covers the same fields and order as suggested (e.g. a primary key) you don't need to create the 'Report_Idx' as well !
"; - } else { - @dix_rdg = DbLog_sqlget($hash,"SHOW INDEX FROM history where Key_name='Report_Idx' and Column_name='READING'"); - @dix_tsp = DbLog_sqlget($hash,"SHOW INDEX FROM history where Key_name='Report_Idx' and Column_name='TIMESTAMP'"); - if (@dix_rdg && @dix_tsp) { - $check .= "At least one DbRep-device assigned to $name is used. "; - $check .= "Index 'Report_Idx' exists and contains recommended fields 'TIMESTAMP', 'READING'.
"; - $rec = "settings o.k."; - } else { - $check .= "You use at least one DbRep-device assigned to $name. "; - $check .= "Index 'Report_Idx' exists but doesn't contain recommended field 'READING'.
" if (!@dix_rdg); - $check .= "Index 'Report_Idx' exists but doesn't contain recommended field 'TIMESTAMP'.
" if (!@dix_tsp); - $rec = "The index should contain the fields 'TIMESTAMP', 'READING'. "; - $rec .= "You can change the index by executing e.g.
"; - $rec .= "'ALTER TABLE `history` DROP INDEX `Report_Idx`, ADD INDEX `Report_Idx` (`TIMESTAMP`, `READING`) USING BTREE'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - } - } - } - if($dbmodel =~ /POSTGRESQL/) { - @dix = DbLog_sqlget($hash,"SELECT * FROM pg_indexes WHERE tablename='history' and indexname ='Report_Idx'"); - if (!@dix) { - $check .= "You use at least one DbRep-device assigned to $name, but the recommended index 'Report_Idx' is missing.
"; - $rec = "You can create the index by executing statement 'CREATE INDEX \"Report_Idx\" ON history USING btree (\"timestamp\", reading)'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - $rec .= "Please make sure the device '$name' is operating in asynchronous mode to avoid FHEM from blocking when creating the index.
"; - $rec .= "Note: If you have just created another index which covers the same fields and order as suggested (e.g. a primary key) you don't need to create the 'Report_Idx' as well !
"; - } else { - $irep = $dix[4]; - $irep_rdg = 1 if($irep =~ /reading/); - $irep_tsp = 1 if($irep =~ /timestamp/); - if ($irep_rdg && $irep_tsp) { - $check .= "Index 'Report_Idx' exists and contains recommended fields 'TIMESTAMP', 'READING'.
"; - $rec = "settings o.k."; - } else { - $check .= "Index 'Report_Idx' exists but doesn't contain recommended field 'READING'.
" if (!$irep_rdg); - $check .= "Index 'Report_Idx' exists but doesn't contain recommended field 'TIMESTAMP'.
" if (!$irep_tsp); - $rec = "The index should contain the fields 'TIMESTAMP', 'READING'. "; - $rec .= "You can change the index by executing e.g.
"; - $rec .= "'DROP INDEX \"Report_Idx\"; CREATE INDEX \"Report_Idx\" ON history USING btree (\"timestamp\", reading)'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - } - } - } - if($dbmodel =~ /SQLITE/) { - @dix = DbLog_sqlget($hash,"SELECT name,sql FROM sqlite_master WHERE type='index' AND name='Report_Idx'"); - if (!$dix[0]) { - $check .= "The index 'Report_Idx' is missing.
"; - $rec = "You can create the index by executing statement 'CREATE INDEX Report_Idx ON `history` (TIMESTAMP, READING)'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - $rec .= "Please make sure the device '$name' is operating in asynchronous mode to avoid FHEM from blocking when creating the index.
"; - $rec .= "Note: If you have just created another index which covers the same fields and order as suggested (e.g. a primary key) you don't need to create the 'Search_Idx' as well !
"; - } else { - $irep = $dix[1]; - $irep_rdg = 1 if(lc($irep) =~ /reading/); - $irep_tsp = 1 if(lc($irep) =~ /timestamp/); - if ($irep_rdg && $irep_tsp) { - $check .= "Index 'Report_Idx' exists and contains recommended fields 'TIMESTAMP', 'READING'.
"; - $rec = "settings o.k."; - } else { - $check .= "Index 'Report_Idx' exists but doesn't contain recommended field 'READING'.
" if (!$irep_rdg); - $check .= "Index 'Report_Idx' exists but doesn't contain recommended field 'TIMESTAMP'.
" if (!$irep_tsp); - $rec = "The index should contain the fields 'TIMESTAMP', 'READING'. "; - $rec .= "You can change the index by executing e.g.
"; - $rec .= "'DROP INDEX \"Report_Idx\"; CREATE INDEX Report_Idx ON `history` (TIMESTAMP, READING)'
"; - $rec .= "Depending on your database size this command may running a long time.
"; - } - } - } - } else { - $check .= "No DbRep-device assigned to $name is used. Hence an index for DbRep isn't needed.
"; - $rec = "settings o.k."; - } - $check .= "Recommendation: $rec

"; - - $check .= ""; - -return $check; -} - -sub DbLog_sqlget($$) { - my ($hash,$sql)= @_; - my $name = $hash->{NAME}; - my ($dbh,$sth,@sr); - - Log3 ($name, 4, "DbLog $name - Executing SQL: $sql"); - - $dbh = DbLog_ConnectNewDBH($hash); - return if(!$dbh); - - eval { $sth = $dbh->prepare("$sql"); - $sth->execute; - }; - if($@) { - $dbh->disconnect if($dbh); - Log3 ($name, 2, "DbLog $name - $@"); - return @sr; - } - - @sr = $sth->fetchrow; - - $sth->finish; - $dbh->disconnect; - no warnings 'uninitialized'; - Log3 ($name, 4, "DbLog $name - SQL result: @sr"); - use warnings; - -return @sr; -} - -######################################################################################### -# -# Addlog - einfügen des Readingwertes eines gegebenen Devices -# -######################################################################################### -sub DbLog_AddLog($$$$$) { - my ($hash,$devrdspec,$value,$nce,$cn)= @_; - my $name = $hash->{NAME}; - my $async = AttrVal($name, "asyncMode", undef); - my $value_fn = AttrVal( $name, "valueFn", "" ); - my $ce = AttrVal($name, "cacheEvents", 0); - my ($dev_type,$dev_name,$dev_reading,$read_val,$event,$ut); - my @row_array; - my $ts; - - return if(IsDisabled($name) || !$hash->{HELPER}{COLSET} || $init_done != 1); - - # Funktion aus Attr valueFn validieren - if( $value_fn =~ m/^\s*(\{.*\})\s*$/s ) { - $value_fn = $1; - } else { - $value_fn = ''; - } - - my $now = gettimeofday(); - - my $rdspec = (split ":",$devrdspec)[-1]; - my @dc = split(":",$devrdspec); - pop @dc; - my $devspec = join(':',@dc); - - my @exdvs = devspec2array($devspec); - Log3 $name, 4, "DbLog $name -> Addlog known devices by devspec: @exdvs"; - foreach (@exdvs) { - $dev_name = $_; - if(!$defs{$dev_name}) { - Log3 $name, 2, "DbLog $name -> Device '$dev_name' used by addLog doesn't exist !"; - next; - } - - my $r = $defs{$dev_name}{READINGS}; - my $DbLogExclude = AttrVal($dev_name, "DbLogExclude", undef); - my @exrds; - my $found = 0; - foreach my $rd (sort keys %{$r}) { - # jedes Reading des Devices auswerten - my $do = 1; - $found = 1 if($rd =~ m/^$rdspec$/); # Reading gefunden - if($DbLogExclude && !$nce) { - my @v1 = split(/,/, $DbLogExclude); - for (my $i=0; $i Device: \"$dev_name\", reading: \"$v2[0]\" excluded by attribute DbLogExclude from addLog !" if($rd =~ m/^$rdspec$/); - $do = 0; - } - } - } - next if(!$do); - push @exrds,$rd if($rd =~ m/^$rdspec$/); - } - Log3 $name, 4, "DbLog $name -> Readings extracted from Regex: @exrds"; - - if(!$found) { - if(goodReadingName($rdspec) && defined($value)) { - Log3 $name, 3, "DbLog $name -> Reading '$rdspec' of device '$dev_name' not found - add it as new reading."; - push @exrds,$rdspec; - } elsif (goodReadingName($rdspec) && !defined($value)) { - Log3 $name, 2, "DbLog $name -> WARNING - new Reading '$rdspec' has no value - can't add it !"; - } else { - Log3 $name, 2, "DbLog $name -> WARNING - Readingname '$rdspec' is no valid or regexp - can't add regexp as new reading !"; - } - } - - no warnings 'uninitialized'; - foreach (@exrds) { - $dev_reading = $_; - $read_val = $value ne ""?$value:ReadingsVal($dev_name,$dev_reading,""); - $dev_type = uc($defs{$dev_name}{TYPE}); - - # dummy-Event zusammenstellen - $event = $dev_reading.": ".$read_val; - - # den zusammengestellten Event parsen lassen (evtl. 
Unit zuweisen) - my @r = DbLog_ParseEvent($dev_name, $dev_type, $event); - $dev_reading = $r[0]; - $read_val = $r[1]; - $ut = $r[2]; - if(!defined $dev_reading) {$dev_reading = "";} - if(!defined $read_val) {$read_val = "";} - if(!defined $ut || $ut eq "") {$ut = AttrVal("$dev_name", "unit", "");} - $event = "addLog"; - - $defs{$dev_name}{Helper}{DBLOG}{$dev_reading}{$hash->{NAME}}{TIME} = $now; - $defs{$dev_name}{Helper}{DBLOG}{$dev_reading}{$hash->{NAME}}{VALUE} = $read_val; - $ts = TimeNow(); - # Anwender spezifische Funktion anwenden - if($value_fn ne '') { - my $TIMESTAMP = $ts; - my $DEVICE = $dev_name; - my $DEVICETYPE = $dev_type; - my $EVENT = $event; - my $READING = $dev_reading; - my $VALUE = $read_val; - my $UNIT = $ut; - my $IGNORE = 0; - my $CN = $cn?$cn:""; - - eval $value_fn; - Log3 $name, 2, "DbLog $name -> error valueFn: ".$@ if($@); - next if($IGNORE); # aktueller Event wird nicht geloggt wenn $IGNORE=1 gesetzt in $value_fn - - $ts = $TIMESTAMP if($TIMESTAMP =~ /^(\d{4})-(\d{2})-(\d{2} \d{2}):(\d{2}):(\d{2})$/); - $dev_name = $DEVICE if($DEVICE ne ''); - $dev_type = $DEVICETYPE if($DEVICETYPE ne ''); - $dev_reading = $READING if($READING ne ''); - $read_val = $VALUE if(defined $VALUE); - $ut = $UNIT if(defined $UNIT); - } - - # Daten auf maximale Länge beschneiden - ($dev_name,$dev_type,$event,$dev_reading,$read_val,$ut) = DbLog_cutCol($hash,$dev_name,$dev_type,$event,$dev_reading,$read_val,$ut); - - if(AttrVal($name, "useCharfilter",0)) { - $dev_reading = DbLog_charfilter($dev_reading); - $read_val = DbLog_charfilter($read_val); - } - - my $row = ($ts."|".$dev_name."|".$dev_type."|".$event."|".$dev_reading."|".$read_val."|".$ut); - Log3 $hash->{NAME}, 3, "DbLog $name -> addLog created - TS: $ts, Device: $dev_name, Type: $dev_type, Event: $event, Reading: $dev_reading, Value: $read_val, Unit: $ut" - if(!AttrVal($name, "suppressAddLogV3",0)); - - if($async) { - # asynchoner non-blocking Mode - # Cache & CacheIndex für Events zum asynchronen Schreiben in DB - $hash->{cache}{index}++; - my $index = $hash->{cache}{index}; - $hash->{cache}{".memcache"}{$index} = $row; - my $memcount = $hash->{cache}{".memcache"}?scalar(keys%{$hash->{cache}{".memcache"}}):0; - if($ce == 1) { - readingsSingleUpdate($hash, "CacheUsage", $memcount, 1); - } else { - readingsSingleUpdate($hash, 'CacheUsage', $memcount, 0); - } - } else { - # synchoner Mode - push(@row_array, $row); - } - } - use warnings; - } - if(!$async) { - if(@row_array) { - # synchoner Mode - # return wenn "reopen" mit Ablaufzeit gestartet ist - return if($hash->{HELPER}{REOPEN_RUNS}); - my $error = DbLog_Push($hash, 1, @row_array); - - my $state = $error?$error:(IsDisabled($name))?"disabled":"connected"; - my $evt = ($state eq $hash->{HELPER}{OLDSTATE})?0:1; - readingsSingleUpdate($hash, "state", $state, $evt); - $hash->{HELPER}{OLDSTATE} = $state; - - Log3 $name, 5, "DbLog $name -> DbLog_Push Returncode: $error"; - } - } -return; -} - -######################################################################################### -# -# Subroutine addCacheLine - einen Datensatz zum Cache hinzufügen -# -######################################################################################### -sub DbLog_addCacheLine($$$$$$$$) { - my ($hash,$i_timestamp,$i_dev,$i_type,$i_evt,$i_reading,$i_val,$i_unit) = @_; - my $name = $hash->{NAME}; - my $ce = AttrVal($name, "cacheEvents", 0); - my $value_fn = AttrVal( $name, "valueFn", "" ); - - # Funktion aus Attr valueFn validieren - if( $value_fn =~ m/^\s*(\{.*\})\s*$/s ) { - $value_fn = $1; - } 
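# Minimal valueFn sketch for the eval below (illustrative; the attribute value
# is an assumption): the user code sees $TIMESTAMP, $DEVICE, $DEVICETYPE,
# $EVENT, $READING, $VALUE, $UNIT and may set $IGNORE to suppress logging, e.g.
#   attr myDbLog valueFn { $IGNORE = 1 if($DEVICE eq "MyDummy" && $VALUE eq ""); }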
else { - $value_fn = ''; - } - if($value_fn ne '') { - my $TIMESTAMP = $i_timestamp; - my $DEVICE = $i_dev; - my $DEVICETYPE = $i_type; - my $EVENT = $i_evt; - my $READING = $i_reading; - my $VALUE = $i_val; - my $UNIT = $i_unit; - my $IGNORE = 0; - - eval $value_fn; - Log3 $name, 2, "DbLog $name -> error valueFn: ".$@ if($@); - if($IGNORE) { - # aktueller Event wird nicht geloggt wenn $IGNORE=1 gesetzt in $value_fn - Log3 $hash->{NAME}, 4, "DbLog $name -> Event ignored by valueFn - TS: $i_timestamp, Device: $i_dev, Type: $i_type, Event: $i_evt, Reading: $i_reading, Value: $i_val, Unit: $i_unit"; - next; - } - - $i_timestamp = $TIMESTAMP if($TIMESTAMP =~ /(19[0-9][0-9]|2[0-9][0-9][0-9])-(0[1-9]|1[1-2])-(0[1-9]|1[0-9]|2[0-9]|3[0-1]) (0[0-9]|1[1-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])/); - $i_dev = $DEVICE if($DEVICE ne ''); - $i_type = $DEVICETYPE if($DEVICETYPE ne ''); - $i_reading = $READING if($READING ne ''); - $i_val = $VALUE if(defined $VALUE); - $i_unit = $UNIT if(defined $UNIT); - } - - no warnings 'uninitialized'; - # Daten auf maximale Länge beschneiden - ($i_dev,$i_type,$i_evt,$i_reading,$i_val,$i_unit) = DbLog_cutCol($hash,$i_dev,$i_type,$i_evt,$i_reading,$i_val,$i_unit); - - my $row = ($i_timestamp."|".$i_dev."|".$i_type."|".$i_evt."|".$i_reading."|".$i_val."|".$i_unit); - $row = DbLog_charfilter($row) if(AttrVal($name, "useCharfilter",0)); - Log3 $hash->{NAME}, 3, "DbLog $name -> added by addCacheLine - TS: $i_timestamp, Device: $i_dev, Type: $i_type, Event: $i_evt, Reading: $i_reading, Value: $i_val, Unit: $i_unit"; - use warnings; - - eval { # one transaction - $hash->{cache}{index}++; - my $index = $hash->{cache}{index}; - $hash->{cache}{".memcache"}{$index} = $row; - - my $memcount = $hash->{cache}{".memcache"}?scalar(keys%{$hash->{cache}{".memcache"}}):0; - if($ce == 1) { - readingsSingleUpdate($hash, "CacheUsage", $memcount, 1); - } else { - readingsSingleUpdate($hash, 'CacheUsage', $memcount, 0); - } - }; - -return; -} - - -######################################################################################### -# -# Subroutine cutCol - Daten auf maximale Länge beschneiden -# -######################################################################################### -sub DbLog_cutCol($$$$$$$) { - my ($hash,$dn,$dt,$evt,$rd,$val,$unit)= @_; - my $name = $hash->{NAME}; - my $colevent = AttrVal($name, 'colEvent', undef); - my $colreading = AttrVal($name, 'colReading', undef); - my $colvalue = AttrVal($name, 'colValue', undef); - - if ($hash->{MODEL} ne 'SQLITE' || defined($colevent) || defined($colreading) || defined($colvalue) ) { - $dn = substr($dn,0, $hash->{HELPER}{DEVICECOL}); - $dt = substr($dt,0, $hash->{HELPER}{TYPECOL}); - $evt = substr($evt,0, $hash->{HELPER}{EVENTCOL}); - $rd = substr($rd,0, $hash->{HELPER}{READINGCOL}); - $val = substr($val,0, $hash->{HELPER}{VALUECOL}); - $unit = substr($unit,0, $hash->{HELPER}{UNITCOL}) if($unit); - } -return ($dn,$dt,$evt,$rd,$val,$unit); -} - -############################################################################### -# liefert zurück ob Autocommit ($useac) bzw. 
Transaktion ($useta) -# verwendet werden soll -# -# basic_ta:on - Autocommit Servereinstellung / Transaktion ein -# basic_ta:off - Autocommit Servereinstellung / Transaktion aus -# ac:on_ta:on - Autocommit ein / Transaktion ein -# ac:on_ta:off - Autocommit ein / Transaktion aus -# ac:off_ta:on - Autocommit aus / Transaktion ein (AC aus impliziert TA ein) -# -# Autocommit: 0/1/2 = aus/ein/Servereinstellung -# Transaktion: 0/1 = aus/ein -############################################################################### -sub DbLog_commitMode ($) { - my ($hash) = @_; - my $name = $hash->{NAME}; - my $useac = 2; # default Servereinstellung - my $useta = 1; # default Transaktion ein - - my $cm = AttrVal($name, "commitMode", "basic_ta:on"); - my ($ac,$ta) = split("_",$cm); - $useac = ($ac =~ /off/)?0:($ac =~ /on/)?1:2; - $useta = 0 if($ta =~ /off/); - -return($useac,$useta); -} - -############################################################################### -# Zeichen von Feldevents filtern -############################################################################### -sub DbLog_charfilter ($) { - my ($txt) = @_; - my ($p,$a); - - # nur erwünschte Zeichen ASCII %d32-126 und Sonderzeichen - $txt =~ s/ß/ss/g; - $txt =~ s/ä/ae/g; - $txt =~ s/ö/oe/g; - $txt =~ s/ü/ue/g; - $txt =~ s/Ä/Ae/g; - $txt =~ s/Ö/Oe/g; - $txt =~ s/Ü/Ue/g; - $txt =~ s/€/EUR/g; - $txt =~ s/\xb0/1degree1/g; - - $txt =~ tr/ A-Za-z0-9!"#$%&'()*+,-.\/:;<=>?@[\\]^_`{|}~//cd; - - $txt =~ s/1degree1/°/g; - -return($txt); -} - -######################################################################################### -### DBLog - Historische Werte ausduennen (alte blockiernde Variante) > Forum #41089 -######################################################################################### -sub DbLog_reduceLog($@) { - my ($hash,@a) = @_; - my ($ret,$row,$err,$filter,$exclude,$c,$day,$hour,$lastHour,$updDate,$updHour,$average,$processingDay,$lastUpdH,%hourlyKnown,%averageHash,@excludeRegex,@dayRows,@averageUpd,@averageUpdD); - my ($name,$startTime,$currentHour,$currentDay,$deletedCount,$updateCount,$sum,$rowCount,$excludeCount) = ($hash->{NAME},time(),99,0,0,0,0,0,0); - my $dbh = DbLog_ConnectNewDBH($hash); - return if(!$dbh); - - if ($a[-1] =~ /^EXCLUDE=(.+:.+)+/i) { - ($filter) = $a[-1] =~ /^EXCLUDE=(.+)/i; - @excludeRegex = split(',',$filter); - } elsif ($a[-1] =~ /^INCLUDE=.+:.+$/i) { - $filter = 1; - } - if (defined($a[3])) { - $average = ($a[3] =~ /average=day/i) ? "AVERAGE=DAY" : ($a[3] =~ /average/i) ? "AVERAGE=HOUR" : 0; - } - Log3($name, 3, "DbLog $name: reduceLog requested with DAYS=$a[2]" - .(($average || $filter) ? ', ' : '').(($average) ? "$average" : '') - .(($average && $filter) ? ", " : '').(($filter) ? 
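# For orientation, illustrative reduceLog invocations and how they land in @a
# (device and reading names are placeholders):
#   set myDbLog reduceLog 70                  # $a[2]="70": keep the first record per device/reading and hour, older than 70 days
#   set myDbLog reduceLog 100:30 average=day  # $a[2]="100:30": older than 100 and newer than 30 days, reduced to daily averages
#   set myDbLog reduceLog 70 EXCLUDE=MyDev:temperature   # $a[-1]: skip matching device:reading pairs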
uc((split('=',$a[-1]))[0]).'='.(split('=',$a[-1]))[1] : '')); - - my ($useac,$useta) = DbLog_commitMode($hash); - my $ac = ($dbh->{AutoCommit})?"ON":"OFF"; - my $tm = ($useta)?"ON":"OFF"; - Log3 $hash->{NAME}, 4, "DbLog $name -> AutoCommit mode: $ac, Transaction mode: $tm"; - - my ($od,$nd) = split(":",$a[2]); # $od - Tage älter als , $nd - Tage neuer als - my ($ots,$nts); - if ($hash->{MODEL} eq 'SQLITE') { - $ots = "datetime('now', '-$od days')"; - $nts = "datetime('now', '-$nd days')" if($nd); - } elsif ($hash->{MODEL} eq 'MYSQL') { - $ots = "DATE_SUB(CURDATE(),INTERVAL $od DAY)"; - $nts = "DATE_SUB(CURDATE(),INTERVAL $nd DAY)" if($nd); - } elsif ($hash->{MODEL} eq 'POSTGRESQL') { - $ots = "NOW() - INTERVAL '$od' DAY"; - $nts = "NOW() - INTERVAL '$nd' DAY" if($nd); - } else { - $ret = 'Unknown database type.'; - } - - if ($ots) { - my ($sth_del, $sth_upd, $sth_delD, $sth_updD, $sth_get); - eval { $sth_del = $dbh->prepare_cached("DELETE FROM history WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?) AND (VALUE=?)"); - $sth_upd = $dbh->prepare_cached("UPDATE history SET TIMESTAMP=?, EVENT=?, VALUE=? WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?) AND (VALUE=?)"); - $sth_delD = $dbh->prepare_cached("DELETE FROM history WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?)"); - $sth_updD = $dbh->prepare_cached("UPDATE history SET TIMESTAMP=?, EVENT=?, VALUE=? WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?)"); - $sth_get = $dbh->prepare("SELECT TIMESTAMP,DEVICE,'',READING,VALUE FROM history WHERE " - .($a[-1] =~ /^INCLUDE=(.+):(.+)$/i ? "DEVICE like '$1' AND READING like '$2' AND " : '') - ."TIMESTAMP < $ots".($nts?" AND TIMESTAMP >= $nts ":" ")."ORDER BY TIMESTAMP ASC"); # '' was EVENT, no longer in use - }; - - $sth_get->execute(); - - do { - $row = $sth_get->fetchrow_arrayref || ['0000-00-00 00:00:00','D','','R','V']; # || execute last-day dummy - $ret = 1; - ($day,$hour) = $row->[0] =~ /-(\d{2})\s(\d{2}):/; - $rowCount++ if($day != 00); - if ($day != $currentDay) { - if ($currentDay) { # false on first executed day - if (scalar @dayRows) { - ($lastHour) = $dayRows[-1]->[0] =~ /(.*\d+\s\d{2}):/; - $c = 0; - for my $delRow (@dayRows) { - $c++ if($day != 00 || $delRow->[0] !~ /$lastHour/); - } - if($c) { - $deletedCount += $c; - Log3($name, 3, "DbLog $name: reduceLog deleting $c records of day: $processingDay"); - $dbh->{RaiseError} = 1; - $dbh->{PrintError} = 0; - eval {$dbh->begin_work() if($dbh->{AutoCommit});}; - eval { - my $i = 0; - my $k = 1; - my $th = ($#dayRows <= 2000)?100:($#dayRows <= 30000)?1000:10000; - for my $delRow (@dayRows) { - if($day != 00 || $delRow->[0] !~ /$lastHour/) { - Log3($name, 5, "DbLog $name: DELETE FROM history WHERE (DEVICE=$delRow->[1]) AND (READING=$delRow->[3]) AND (TIMESTAMP=$delRow->[0]) AND (VALUE=$delRow->[4])"); - $sth_del->execute(($delRow->[1], $delRow->[3], $delRow->[0], $delRow->[4])); - $i++; - if($i == $th) { - my $prog = $k * $i; - Log3($name, 3, "DbLog $name: reduceLog deletion progress of day: $processingDay is: $prog"); - $i = 0; - $k++; - } - } - } - }; - if ($@) { - Log3($hash->{NAME}, 3, "DbLog $name: reduceLog ! FAILED ! 
for day $processingDay"); - eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; - $ret = 0; - } else { - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; - } - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - } - @dayRows = (); - } - - if ($ret && defined($a[3]) && $a[3] =~ /average/i) { - $dbh->{RaiseError} = 1; - $dbh->{PrintError} = 0; - eval {$dbh->begin_work() if($dbh->{AutoCommit});}; - eval { - push(@averageUpd, {%hourlyKnown}) if($day != 00); - - $c = 0; - for my $hourHash (@averageUpd) { # Only count for logging... - for my $hourKey (keys %$hourHash) { - $c++ if ($hourHash->{$hourKey}->[0] && scalar(@{$hourHash->{$hourKey}->[4]}) > 1); - } - } - $updateCount += $c; - Log3($name, 3, "DbLog $name: reduceLog (hourly-average) updating $c records of day: $processingDay") if($c); # else only push to @averageUpdD - - my $i = 0; - my $k = 1; - my $th = ($c <= 2000)?100:($c <= 30000)?1000:10000; - for my $hourHash (@averageUpd) { - for my $hourKey (keys %$hourHash) { - if ($hourHash->{$hourKey}->[0]) { # true if reading is a number - ($updDate,$updHour) = $hourHash->{$hourKey}->[0] =~ /(.*\d+)\s(\d{2}):/; - if (scalar(@{$hourHash->{$hourKey}->[4]}) > 1) { # true if reading has multiple records this hour - for (@{$hourHash->{$hourKey}->[4]}) { $sum += $_; } - $average = sprintf('%.3f', $sum/scalar(@{$hourHash->{$hourKey}->[4]}) ); - $sum = 0; - Log3($name, 5, "DbLog $name: UPDATE history SET TIMESTAMP=$updDate $updHour:30:00, EVENT='rl_av_h', VALUE=$average WHERE DEVICE=$hourHash->{$hourKey}->[1] AND READING=$hourHash->{$hourKey}->[3] AND TIMESTAMP=$hourHash->{$hourKey}->[0] AND VALUE=$hourHash->{$hourKey}->[4]->[0]"); - $sth_upd->execute(("$updDate $updHour:30:00", 'rl_av_h', $average, $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $hourHash->{$hourKey}->[0], $hourHash->{$hourKey}->[4]->[0])); - - $i++; - if($i == $th) { - my $prog = $k * $i; - Log3($name, 3, "DbLog $name: reduceLog (hourly-average) updating progress of day: $processingDay is: $prog"); - $i = 0; - $k++; - } - push(@averageUpdD, ["$updDate $updHour:30:00", 'rl_av_h', $average, $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $updDate]) if (defined($a[3]) && $a[3] =~ /average=day/i); - } else { - push(@averageUpdD, [$hourHash->{$hourKey}->[0], $hourHash->{$hourKey}->[2], $hourHash->{$hourKey}->[4]->[0], $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $updDate]) if (defined($a[3]) && $a[3] =~ /average=day/i); - } - } - } - } - }; - if ($@) { - $err = $@; - Log3($hash->{NAME}, 2, "DbLog $name - reduceLogNbl ! FAILED ! 
for day $processingDay: $err"); - eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; - @averageUpdD = (); - } else { - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; - } - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - @averageUpd = (); - } - - if (defined($a[3]) && $a[3] =~ /average=day/i && scalar(@averageUpdD) && $day != 00) { - $dbh->{RaiseError} = 1; - $dbh->{PrintError} = 0; - eval {$dbh->begin_work() if($dbh->{AutoCommit});}; - eval { - for (@averageUpdD) { - push(@{$averageHash{$_->[3].$_->[4]}->{tedr}}, [$_->[0], $_->[1], $_->[3], $_->[4]]); - $averageHash{$_->[3].$_->[4]}->{sum} += $_->[2]; - $averageHash{$_->[3].$_->[4]}->{date} = $_->[5]; - } - - $c = 0; - for (keys %averageHash) { - if(scalar @{$averageHash{$_}->{tedr}} == 1) { - delete $averageHash{$_}; - } else { - $c += (scalar(@{$averageHash{$_}->{tedr}}) - 1); - } - } - $deletedCount += $c; - $updateCount += keys(%averageHash); - - my ($id,$iu) = 0; - my ($kd,$ku) = 1; - my $thd = ($c <= 2000)?100:($c <= 30000)?1000:10000; - my $thu = ((keys %averageHash) <= 2000)?100:((keys %averageHash) <= 30000)?1000:10000; - Log3($name, 3, "DbLog $name: reduceLog (daily-average) updating ".(keys %averageHash).", deleting $c records of day: $processingDay") if(keys %averageHash); - for my $reading (keys %averageHash) { - $average = sprintf('%.3f', $averageHash{$reading}->{sum}/scalar(@{$averageHash{$reading}->{tedr}})); - $lastUpdH = pop @{$averageHash{$reading}->{tedr}}; - for (@{$averageHash{$reading}->{tedr}}) { - Log3($name, 5, "DbLog $name: DELETE FROM history WHERE DEVICE='$_->[2]' AND READING='$_->[3]' AND TIMESTAMP='$_->[0]'"); - $sth_delD->execute(($_->[2], $_->[3], $_->[0])); - - $id++; - if($id == $thd) { - my $prog = $kd * $id; - Log3($name, 3, "DbLog $name: reduceLog (daily-average) deleting progress of day: $processingDay is: $prog"); - $id = 0; - $kd++; - } - } - Log3($name, 5, "DbLog $name: UPDATE history SET TIMESTAMP=$averageHash{$reading}->{date} 12:00:00, EVENT='rl_av_d', VALUE=$average WHERE (DEVICE=$lastUpdH->[2]) AND (READING=$lastUpdH->[3]) AND (TIMESTAMP=$lastUpdH->[0])"); - $sth_updD->execute(($averageHash{$reading}->{date}." 12:00:00", 'rl_av_d', $average, $lastUpdH->[2], $lastUpdH->[3], $lastUpdH->[0])); - - $iu++; - if($iu == $thu) { - my $prog = $ku * $id; - Log3($name, 3, "DbLog $name: reduceLog (daily-average) updating progress of day: $processingDay is: $prog"); - $iu = 0; - $ku++; - } - } - }; - if ($@) { - $err = $@; - Log3($hash->{NAME}, 2, "DbLog $name - reduceLogNbl ! FAILED ! 
for day $processingDay: $err"); - eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; - } else { - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; - } - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - } - %averageHash = (); - %hourlyKnown = (); - @averageUpd = (); - @averageUpdD = (); - $currentHour = 99; - } - $currentDay = $day; - } - - if ($hour != $currentHour) { # forget records from last hour, but remember these for average - if (defined($a[3]) && $a[3] =~ /average/i && keys(%hourlyKnown)) { - push(@averageUpd, {%hourlyKnown}); - } - %hourlyKnown = (); - $currentHour = $hour; - } - if (defined $hourlyKnown{$row->[1].$row->[3]}) { # remember first readings for device per h, other can be deleted - push(@dayRows, [@$row]); - if (defined($a[3]) && $a[3] =~ /average/i && defined($row->[4]) && $row->[4] =~ /^-?(?:\d+(?:\.\d*)?|\.\d+)$/ && $hourlyKnown{$row->[1].$row->[3]}->[0]) { - if ($hourlyKnown{$row->[1].$row->[3]}->[0]) { - push(@{$hourlyKnown{$row->[1].$row->[3]}->[4]}, $row->[4]); - } - } - } else { - $exclude = 0; - for (@excludeRegex) { - $exclude = 1 if("$row->[1]:$row->[3]" =~ /^$_$/); - } - if ($exclude) { - $excludeCount++ if($day != 00); - } else { - $hourlyKnown{$row->[1].$row->[3]} = (defined($row->[4]) && $row->[4] =~ /^-?(?:\d+(?:\.\d*)?|\.\d+)$/) ? [$row->[0],$row->[1],$row->[2],$row->[3],[$row->[4]]] : [0]; - } - } - $processingDay = (split(' ',$row->[0]))[0]; - } while( $day != 00 ); - - my $result = "Rows processed: $rowCount, deleted: $deletedCount" - .((defined($a[3]) && $a[3] =~ /average/i)? ", updated: $updateCount" : '') - .(($excludeCount)? ", excluded: $excludeCount" : '') - .", time: ".sprintf('%.2f',time() - $startTime)."sec"; - Log3($name, 3, "DbLog $name: reduceLog executed. $result"); - readingsSingleUpdate($hash,"reduceLogState",$result,1); - $ret = "reduceLog executed. 
$result"; - } - $dbh->disconnect(); - return $ret; -} - -######################################################################################### -### DBLog - Historische Werte ausduennen non-blocking > Forum #41089 -######################################################################################### -sub DbLog_reduceLogNbl($) { - my ($name) = @_; - my $hash = $defs{$name}; - my $dbconn = $hash->{dbconn}; - my $dbuser = $hash->{dbuser}; - my $dbpassword = $attr{"sec$name"}{secret}; - my @a = @{$hash->{HELPER}{REDUCELOG}}; - my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; - delete $hash->{HELPER}{REDUCELOG}; - my ($ret,$row,$filter,$exclude,$c,$day,$hour,$lastHour,$updDate,$updHour,$average,$processingDay,$lastUpdH,%hourlyKnown,%averageHash,@excludeRegex,@dayRows,@averageUpd,@averageUpdD); - my ($startTime,$currentHour,$currentDay,$deletedCount,$updateCount,$sum,$rowCount,$excludeCount) = (time(),99,0,0,0,0,0,0); - my ($dbh,$err); - - Log3 ($name, 5, "DbLog $name -> Start DbLog_reduceLogNbl"); - - my ($useac,$useta) = DbLog_commitMode($hash); - if(!$useac) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 0 });}; - } elsif($useac == 1) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1 });}; - } else { - # Server default - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1 });}; - } - if ($@) { - $err = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - Log3 ($name, 5, "DbLog $name -> DbLog_reduceLogNbl finished"); - return "$name|''|$err"; - } - - if ($a[-1] =~ /^EXCLUDE=(.+:.+)+/i) { - ($filter) = $a[-1] =~ /^EXCLUDE=(.+)/i; - @excludeRegex = split(',',$filter); - } elsif ($a[-1] =~ /^INCLUDE=.+:.+$/i) { - $filter = 1; - } - if (defined($a[3])) { - $average = ($a[3] =~ /average=day/i) ? "AVERAGE=DAY" : ($a[3] =~ /average/i) ? "AVERAGE=HOUR" : 0; - } - - Log3($name, 3, "DbLog $name: reduceLogNbl requested with DAYS=$a[2]" - .(($average || $filter) ? ', ' : '').(($average) ? "$average" : '') - .(($average && $filter) ? ", " : '').(($filter) ? uc((split('=',$a[-1]))[0]).'='.(split('=',$a[-1]))[1] : '')); - - my $ac = ($dbh->{AutoCommit})?"ON":"OFF"; - my $tm = ($useta)?"ON":"OFF"; - Log3 $hash->{NAME}, 4, "DbLog $name -> AutoCommit mode: $ac, Transaction mode: $tm"; - - my ($od,$nd) = split(":",$a[2]); # $od - Tage älter als , $nd - Tage neuer als - my ($ots,$nts); - if ($hash->{MODEL} eq 'SQLITE') { - $ots = "datetime('now', '-$od days')"; - $nts = "datetime('now', '-$nd days')" if($nd); - } elsif ($hash->{MODEL} eq 'MYSQL') { - $ots = "DATE_SUB(CURDATE(),INTERVAL $od DAY)"; - $nts = "DATE_SUB(CURDATE(),INTERVAL $nd DAY)" if($nd); - } elsif ($hash->{MODEL} eq 'POSTGRESQL') { - $ots = "NOW() - INTERVAL '$od' DAY"; - $nts = "NOW() - INTERVAL '$nd' DAY" if($nd); - } else { - $ret = 'Unknown database type.'; - } - - if ($ots) { - my ($sth_del, $sth_upd, $sth_delD, $sth_updD, $sth_get); - eval { $sth_del = $dbh->prepare_cached("DELETE FROM history WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?) AND (VALUE=?)"); - $sth_upd = $dbh->prepare_cached("UPDATE history SET TIMESTAMP=?, EVENT=?, VALUE=? WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?) AND (VALUE=?)"); - $sth_delD = $dbh->prepare_cached("DELETE FROM history WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?)"); - $sth_updD = $dbh->prepare_cached("UPDATE history SET TIMESTAMP=?, EVENT=?, VALUE=? WHERE (DEVICE=?) 
AND (READING=?) AND (TIMESTAMP=?)"); - $sth_get = $dbh->prepare("SELECT TIMESTAMP,DEVICE,'',READING,VALUE FROM history WHERE " - .($a[-1] =~ /^INCLUDE=(.+):(.+)$/i ? "DEVICE like '$1' AND READING like '$2' AND " : '') - ."TIMESTAMP < $ots".($nts?" AND TIMESTAMP >= $nts ":" ")."ORDER BY TIMESTAMP ASC"); # '' was EVENT, no longer in use - }; - if ($@) { - $err = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - Log3 ($name, 5, "DbLog $name -> DbLog_reduceLogNbl finished"); - return "$name|''|$err"; - } - - eval { $sth_get->execute(); }; - if ($@) { - $err = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - Log3 ($name, 5, "DbLog $name -> DbLog_reduceLogNbl finished"); - return "$name|''|$err"; - } - - do { - $row = $sth_get->fetchrow_arrayref || ['0000-00-00 00:00:00','D','','R','V']; # || execute last-day dummy - $ret = 1; - ($day,$hour) = $row->[0] =~ /-(\d{2})\s(\d{2}):/; - $rowCount++ if($day != 00); - if ($day != $currentDay) { - if ($currentDay) { # false on first executed day - if (scalar @dayRows) { - ($lastHour) = $dayRows[-1]->[0] =~ /(.*\d+\s\d{2}):/; - $c = 0; - for my $delRow (@dayRows) { - $c++ if($day != 00 || $delRow->[0] !~ /$lastHour/); - } - if($c) { - $deletedCount += $c; - Log3($name, 3, "DbLog $name: reduceLogNbl deleting $c records of day: $processingDay"); - $dbh->{RaiseError} = 1; - $dbh->{PrintError} = 0; - eval {$dbh->begin_work() if($dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - eval { - my $i = 0; - my $k = 1; - my $th = ($#dayRows <= 2000)?100:($#dayRows <= 30000)?1000:10000; - for my $delRow (@dayRows) { - if($day != 00 || $delRow->[0] !~ /$lastHour/) { - Log3($name, 4, "DbLog $name: DELETE FROM history WHERE (DEVICE=$delRow->[1]) AND (READING=$delRow->[3]) AND (TIMESTAMP=$delRow->[0]) AND (VALUE=$delRow->[4])"); - $sth_del->execute(($delRow->[1], $delRow->[3], $delRow->[0], $delRow->[4])); - $i++; - if($i == $th) { - my $prog = $k * $i; - Log3($name, 3, "DbLog $name: reduceLogNbl deletion progress of day: $processingDay is: $prog"); - $i = 0; - $k++; - } - } - } - }; - if ($@) { - $err = $@; - Log3($hash->{NAME}, 2, "DbLog $name - reduceLogNbl ! FAILED ! for day $processingDay: $err"); - eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - $ret = 0; - } else { - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - } - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - } - @dayRows = (); - } - - if ($ret && defined($a[3]) && $a[3] =~ /average/i) { - $dbh->{RaiseError} = 1; - $dbh->{PrintError} = 0; - eval {$dbh->begin_work() if($dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - eval { - push(@averageUpd, {%hourlyKnown}) if($day != 00); - - $c = 0; - for my $hourHash (@averageUpd) { # Only count for logging... 
- for my $hourKey (keys %$hourHash) { - $c++ if ($hourHash->{$hourKey}->[0] && scalar(@{$hourHash->{$hourKey}->[4]}) > 1); - } - } - $updateCount += $c; - Log3($name, 3, "DbLog $name: reduceLogNbl (hourly-average) updating $c records of day: $processingDay") if($c); # else only push to @averageUpdD - - my $i = 0; - my $k = 1; - my $th = ($c <= 2000)?100:($c <= 30000)?1000:10000; - for my $hourHash (@averageUpd) { - for my $hourKey (keys %$hourHash) { - if ($hourHash->{$hourKey}->[0]) { # true if reading is a number - ($updDate,$updHour) = $hourHash->{$hourKey}->[0] =~ /(.*\d+)\s(\d{2}):/; - if (scalar(@{$hourHash->{$hourKey}->[4]}) > 1) { # true if reading has multiple records this hour - for (@{$hourHash->{$hourKey}->[4]}) { $sum += $_; } - $average = sprintf('%.3f', $sum/scalar(@{$hourHash->{$hourKey}->[4]}) ); - $sum = 0; - Log3($name, 4, "DbLog $name: UPDATE history SET TIMESTAMP=$updDate $updHour:30:00, EVENT='rl_av_h', VALUE=$average WHERE DEVICE=$hourHash->{$hourKey}->[1] AND READING=$hourHash->{$hourKey}->[3] AND TIMESTAMP=$hourHash->{$hourKey}->[0] AND VALUE=$hourHash->{$hourKey}->[4]->[0]"); - $sth_upd->execute(("$updDate $updHour:30:00", 'rl_av_h', $average, $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $hourHash->{$hourKey}->[0], $hourHash->{$hourKey}->[4]->[0])); - - $i++; - if($i == $th) { - my $prog = $k * $i; - Log3($name, 3, "DbLog $name: reduceLogNbl (hourly-average) updating progress of day: $processingDay is: $prog"); - $i = 0; - $k++; - } - push(@averageUpdD, ["$updDate $updHour:30:00", 'rl_av_h', $average, $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $updDate]) if (defined($a[3]) && $a[3] =~ /average=day/i); - } else { - push(@averageUpdD, [$hourHash->{$hourKey}->[0], $hourHash->{$hourKey}->[2], $hourHash->{$hourKey}->[4]->[0], $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $updDate]) if (defined($a[3]) && $a[3] =~ /average=day/i); - } - } - } - } - }; - if ($@) { - $err = $@; - Log3($hash->{NAME}, 2, "DbLog $name - reduceLogNbl average=hour ! FAILED ! 
for day $processingDay: $err"); - eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - @averageUpdD = (); - } else { - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - } - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - @averageUpd = (); - } - - if (defined($a[3]) && $a[3] =~ /average=day/i && scalar(@averageUpdD) && $day != 00) { - $dbh->{RaiseError} = 1; - $dbh->{PrintError} = 0; - eval {$dbh->begin_work() if($dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - eval { - for (@averageUpdD) { - push(@{$averageHash{$_->[3].$_->[4]}->{tedr}}, [$_->[0], $_->[1], $_->[3], $_->[4]]); - $averageHash{$_->[3].$_->[4]}->{sum} += $_->[2]; - $averageHash{$_->[3].$_->[4]}->{date} = $_->[5]; - } - - $c = 0; - for (keys %averageHash) { - if(scalar @{$averageHash{$_}->{tedr}} == 1) { - delete $averageHash{$_}; - } else { - $c += (scalar(@{$averageHash{$_}->{tedr}}) - 1); - } - } - $deletedCount += $c; - $updateCount += keys(%averageHash); - - my ($id,$iu) = 0; - my ($kd,$ku) = 1; - my $thd = ($c <= 2000)?100:($c <= 30000)?1000:10000; - my $thu = ((keys %averageHash) <= 2000)?100:((keys %averageHash) <= 30000)?1000:10000; - Log3($name, 3, "DbLog $name: reduceLogNbl (daily-average) updating ".(keys %averageHash).", deleting $c records of day: $processingDay") if(keys %averageHash); - for my $reading (keys %averageHash) { - $average = sprintf('%.3f', $averageHash{$reading}->{sum}/scalar(@{$averageHash{$reading}->{tedr}})); - $lastUpdH = pop @{$averageHash{$reading}->{tedr}}; - for (@{$averageHash{$reading}->{tedr}}) { - Log3($name, 5, "DbLog $name: DELETE FROM history WHERE DEVICE='$_->[2]' AND READING='$_->[3]' AND TIMESTAMP='$_->[0]'"); - $sth_delD->execute(($_->[2], $_->[3], $_->[0])); - - $id++; - if($id == $thd) { - my $prog = $kd * $id; - Log3($name, 3, "DbLog $name: reduceLogNbl (daily-average) deleting progress of day: $processingDay is: $prog"); - $id = 0; - $kd++; - } - } - Log3($name, 4, "DbLog $name: UPDATE history SET TIMESTAMP=$averageHash{$reading}->{date} 12:00:00, EVENT='rl_av_d', VALUE=$average WHERE (DEVICE=$lastUpdH->[2]) AND (READING=$lastUpdH->[3]) AND (TIMESTAMP=$lastUpdH->[0])"); - $sth_updD->execute(($averageHash{$reading}->{date}." 12:00:00", 'rl_av_d', $average, $lastUpdH->[2], $lastUpdH->[3], $lastUpdH->[0])); - - $iu++; - if($iu == $thu) { - my $prog = $ku * $id; - Log3($name, 3, "DbLog $name: reduceLogNbl (daily-average) updating progress of day: $processingDay is: $prog"); - $iu = 0; - $ku++; - } - } - }; - if ($@) { - Log3($hash->{NAME}, 3, "DbLog $name: reduceLogNbl average=day ! FAILED ! 
for day $processingDay"); - eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - } else { - eval {$dbh->commit() if(!$dbh->{AutoCommit});}; - if ($@) { - Log3 ($name, 2, "DbLog $name -> DbLog_reduceLogNbl - $@"); - } - } - $dbh->{RaiseError} = 0; - $dbh->{PrintError} = 1; - } - %averageHash = (); - %hourlyKnown = (); - @averageUpd = (); - @averageUpdD = (); - $currentHour = 99; - } - $currentDay = $day; - } - - if ($hour != $currentHour) { # forget records from last hour, but remember these for average - if (defined($a[3]) && $a[3] =~ /average/i && keys(%hourlyKnown)) { - push(@averageUpd, {%hourlyKnown}); - } - %hourlyKnown = (); - $currentHour = $hour; - } - if (defined $hourlyKnown{$row->[1].$row->[3]}) { # remember first readings for device per h, other can be deleted - push(@dayRows, [@$row]); - if (defined($a[3]) && $a[3] =~ /average/i && defined($row->[4]) && $row->[4] =~ /^-?(?:\d+(?:\.\d*)?|\.\d+)$/ && $hourlyKnown{$row->[1].$row->[3]}->[0]) { - if ($hourlyKnown{$row->[1].$row->[3]}->[0]) { - push(@{$hourlyKnown{$row->[1].$row->[3]}->[4]}, $row->[4]); - } - } - } else { - $exclude = 0; - for (@excludeRegex) { - $exclude = 1 if("$row->[1]:$row->[3]" =~ /^$_$/); - } - if ($exclude) { - $excludeCount++ if($day != 00); - } else { - $hourlyKnown{$row->[1].$row->[3]} = (defined($row->[4]) && $row->[4] =~ /^-?(?:\d+(?:\.\d*)?|\.\d+)$/) ? [$row->[0],$row->[1],$row->[2],$row->[3],[$row->[4]]] : [0]; - } - } - $processingDay = (split(' ',$row->[0]))[0]; - } while( $day != 00 ); - - my $result = "Rows processed: $rowCount, deleted: $deletedCount" - .((defined($a[3]) && $a[3] =~ /average/i)? ", updated: $updateCount" : '') - .(($excludeCount)? ", excluded: $excludeCount" : '') - .", time: ".sprintf('%.2f',time() - $startTime)."sec"; - Log3($name, 3, "DbLog $name: reduceLogNbl finished. $result"); - $ret = $result; - $ret = "reduceLogNbl finished. 
$result"; - } - - $dbh->disconnect(); - $ret = encode_base64($ret,""); - Log3 ($name, 5, "DbLog $name -> DbLog_reduceLogNbl finished"); - -return "$name|$ret|0"; -} - -######################################################################################### -# DBLog - reduceLogNbl non-blocking Rückkehrfunktion -######################################################################################### -sub DbLog_reduceLogNbl_finished($) { - my ($string) = @_; - my @a = split("\\|",$string); - my $name = $a[0]; - my $hash = $defs{$name}; - my $ret = decode_base64($a[1]); - my $err = decode_base64($a[2]) if ($a[2]); - - readingsSingleUpdate($hash,"reduceLogState",$err?$err:$ret,1); - delete $hash->{HELPER}{REDUCELOG_PID}; -return; -} - -######################################################################################### -# DBLog - count non-blocking -######################################################################################### -sub DbLog_countNbl($) { - my ($name) = @_; - my $hash = $defs{$name}; - my ($cc,$hc,$bst,$st,$rt); - - # Background-Startzeit - $bst = [gettimeofday]; - - my $dbh = DbLog_ConnectNewDBH($hash); - if (!$dbh) { - my $err = encode_base64("DbLog $name: DBLog_Set - count - DB connect not possible",""); - return "$name|0|0|$err|0"; - } else { - Log3 $name,4,"DbLog $name: Records count requested."; - # SQL-Startzeit - $st = [gettimeofday]; - $hc = $dbh->selectrow_array('SELECT count(*) FROM history'); - $cc = $dbh->selectrow_array('SELECT count(*) FROM current'); - $dbh->disconnect(); - # SQL-Laufzeit ermitteln - $rt = tv_interval($st); - } - - # Background-Laufzeit ermitteln - my $brt = tv_interval($bst); - $rt = $rt.",".$brt; -return "$name|$cc|$hc|0|$rt"; -} - -######################################################################################### -# DBLog - count non-blocking Rückkehrfunktion -######################################################################################### -sub DbLog_countNbl_finished($) -{ - my ($string) = @_; - my @a = split("\\|",$string); - my $name = $a[0]; - my $hash = $defs{$name}; - my $cc = $a[1]; - my $hc = $a[2]; - my $err = decode_base64($a[3]) if ($a[3]); - my $bt = $a[4] if($a[4]); - - readingsSingleUpdate($hash,"state",$err,1) if($err); - readingsSingleUpdate($hash,"countHistory",$hc,1) if ($hc); - readingsSingleUpdate($hash,"countCurrent",$cc,1) if ($cc); - - if(AttrVal($name, "showproctime", undef) && $bt) { - my ($rt,$brt) = split(",", $bt); - readingsBeginUpdate($hash); - readingsBulkUpdate($hash, "background_processing_time", sprintf("%.4f",$brt)); - readingsBulkUpdate($hash, "sql_processing_time", sprintf("%.4f",$rt)); - readingsEndUpdate($hash, 1); - } - delete $hash->{HELPER}{COUNT_PID}; -return; -} - -######################################################################################### -# DBLog - deleteOldDays non-blocking -######################################################################################### -sub DbLog_deldaysNbl($) { - my ($name) = @_; - my $hash = $defs{$name}; - my $dbconn = $hash->{dbconn}; - my $dbuser = $hash->{dbuser}; - my $dbpassword = $attr{"sec$name"}{secret}; - my $days = delete($hash->{HELPER}{DELDAYS}); - my ($cmd,$dbh,$rows,$error,$sth,$ret,$bst,$brt,$st,$rt); - - Log3 ($name, 5, "DbLog $name -> Start DbLog_deldaysNbl $days"); - - # Background-Startzeit - $bst = [gettimeofday]; - - my ($useac,$useta) = DbLog_commitMode($hash); - if(!$useac) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 0 });}; - } 
elsif($useac == 1) { - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1 });}; - } else { - # Server default - eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1 });}; - } - if ($@) { - $error = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name - Error: $@"); - Log3 ($name, 5, "DbLog $name -> DbLog_deldaysNbl finished"); - return "$name|0|0|$error"; - } - - my $ac = ($dbh->{AutoCommit})?"ON":"OFF"; - my $tm = ($useta)?"ON":"OFF"; - Log3 $hash->{NAME}, 4, "DbLog $name -> AutoCommit mode: $ac, Transaction mode: $tm"; - - $cmd = "delete from history where TIMESTAMP < "; - if ($hash->{MODEL} eq 'SQLITE') { - $cmd .= "datetime('now', '-$days days')"; - } elsif ($hash->{MODEL} eq 'MYSQL') { - $cmd .= "DATE_SUB(CURDATE(),INTERVAL $days DAY)"; - } elsif ($hash->{MODEL} eq 'POSTGRESQL') { - $cmd .= "NOW() - INTERVAL '$days' DAY"; - } else { - $ret = 'Unknown database type. Maybe you can try userCommand anyway.'; - $error = encode_base64($ret,""); - Log3 ($name, 2, "DbLog $name - Error: $ret"); - Log3 ($name, 5, "DbLog $name -> DbLog_deldaysNbl finished"); - return "$name|0|0|$error"; - } - - # SQL-Startzeit - $st = [gettimeofday]; - - eval { - $sth = $dbh->prepare($cmd); - $sth->execute(); - }; - - if ($@) { - $error = encode_base64($@,""); - Log3 ($name, 2, "DbLog $name - $@"); - $dbh->disconnect; - Log3 ($name, 4, "DbLog $name -> BlockingCall DbLog_deldaysNbl finished"); - return "$name|0|0|$error"; - } else { - $rows = $sth->rows; - $dbh->commit() if(!$dbh->{AutoCommit}); - $dbh->disconnect; - } - - # SQL-Laufzeit ermitteln - $rt = tv_interval($st); - - # Background-Laufzeit ermitteln - $brt = tv_interval($bst); - $rt = $rt.",".$brt; - - Log3 ($name, 5, "DbLog $name -> DbLog_deldaysNbl finished"); -return "$name|$rows|$rt|0"; -} - -######################################################################################### -# DBLog - deleteOldDays non-blocking Rückkehrfunktion -######################################################################################### -sub DbLog_deldaysNbl_done($) { - my ($string) = @_; - my @a = split("\\|",$string); - my $name = $a[0]; - my $hash = $defs{$name}; - my $rows = $a[1]; - my $bt = $a[2] if($a[2]); - my $err = decode_base64($a[3]) if ($a[3]); - - Log3 ($name, 5, "DbLog $name -> Start DbLog_deldaysNbl_done"); - - if ($err) { - readingsSingleUpdate($hash,"state",$err,1); - delete $hash->{HELPER}{DELDAYS_PID}; - Log3 ($name, 5, "DbLog $name -> DbLog_deldaysNbl_done finished"); - return; - } else { - if(AttrVal($name, "showproctime", undef) && $bt) { - my ($rt,$brt) = split(",", $bt); - readingsBeginUpdate($hash); - readingsBulkUpdate($hash, "background_processing_time", sprintf("%.4f",$brt)); - readingsBulkUpdate($hash, "sql_processing_time", sprintf("%.4f",$rt)); - readingsEndUpdate($hash, 1); - } - readingsSingleUpdate($hash, "lastRowsDeleted", $rows ,1); - } - my $db = (split(/;|=/, $hash->{dbconn}))[1]; - Log3 ($name, 3, "DbLog $name -> deleteOldDaysNbl finished. 
$rows entries of database $db deleted."); - delete $hash->{HELPER}{DELDAYS_PID}; - Log3 ($name, 5, "DbLog $name -> DbLog_deldaysNbl_done finished"); -return; -} - -################################################################ -# benutzte DB-Feldlängen in Helper und Internals setzen -################################################################ -sub DbLog_setinternalcols ($){ - my ($hash)= @_; - my $name = $hash->{NAME}; - - $hash->{HELPER}{DEVICECOL} = $columns{DEVICE}; - $hash->{HELPER}{TYPECOL} = $columns{TYPE}; - $hash->{HELPER}{EVENTCOL} = AttrVal($name, "colEvent", $columns{EVENT}); - $hash->{HELPER}{READINGCOL} = AttrVal($name, "colReading", $columns{READING}); - $hash->{HELPER}{VALUECOL} = AttrVal($name, "colValue", $columns{VALUE}); - $hash->{HELPER}{UNITCOL} = $columns{UNIT}; - - $hash->{COLUMNS} = "field length used for Device: $hash->{HELPER}{DEVICECOL}, Type: $hash->{HELPER}{TYPECOL}, Event: $hash->{HELPER}{EVENTCOL}, Reading: $hash->{HELPER}{READINGCOL}, Value: $hash->{HELPER}{VALUECOL}, Unit: $hash->{HELPER}{UNITCOL} "; - - # Statusbit "Columns sind gesetzt" - $hash->{HELPER}{COLSET} = 1; - -return; -} - -################################################################ -# reopen DB-Connection nach Ablauf set ... reopen [n] seconds -################################################################ -sub DbLog_reopen ($){ - my ($hash) = @_; - my $name = $hash->{NAME}; - my $async = AttrVal($name, "asyncMode", undef); - - RemoveInternalTimer($hash, "DbLog_reopen"); - - if(DbLog_ConnectPush($hash)) { - # Statusbit "Kein Schreiben in DB erlauben" löschen - my $delay = delete $hash->{HELPER}{REOPEN_RUNS}; - delete $hash->{HELPER}{REOPEN_RUNS_UNTIL}; - Log3($name, 2, "DbLog $name: Database connection reopened (it was $delay seconds closed).") if($delay); - readingsSingleUpdate($hash, "state", "reopened", 1); - $hash->{HELPER}{OLDSTATE} = "reopened"; - DbLog_execmemcache($hash) if($async); - } else { - InternalTimer(gettimeofday()+30, "DbLog_reopen", $hash, 0); - } - -return; -} - -################################################################ -# check ob primary key genutzt wird -################################################################ -sub DbLog_checkUsePK ($$){ - my ($hash,$dbh) = @_; - my $name = $hash->{NAME}; - my $dbconn = $hash->{dbconn}; - my $upkh = 0; - my $upkc = 0; - my (@pkh,@pkc); - - my $db = (split("=",(split(";",$dbconn))[0]))[1]; - eval {@pkh = $dbh->primary_key( undef, undef, 'history' );}; - eval {@pkc = $dbh->primary_key( undef, undef, 'current' );}; - my $pkh = (!@pkh || @pkh eq "")?"none":join(",",@pkh); - my $pkc = (!@pkc || @pkc eq "")?"none":join(",",@pkc); - $pkh =~ tr/"//d; - $pkc =~ tr/"//d; - $upkh = 1 if(@pkh && @pkh ne "none"); - $upkc = 1 if(@pkc && @pkc ne "none"); - Log3 $hash->{NAME}, 5, "DbLog $name -> Primary Key used in $db.history: $pkh"; - Log3 $hash->{NAME}, 5, "DbLog $name -> Primary Key used in $db.current: $pkc"; - - return ($upkh,$upkc,$pkh,$pkc); -} - -################################################################ -# Routine für FHEMWEB Detailanzeige -################################################################ -sub DbLog_fhemwebFn($$$$) { - my ($FW_wname, $d, $room, $pageHash) = @_; # pageHash is set for summaryFn. - - my $ret; - my $newIdx=1; - while($defs{"SVG_${d}_$newIdx"}) { - $newIdx++; - } - my $name = "SVG_${d}_$newIdx"; - $ret .= FW_pH("cmd=define $name SVG $d:templateDB:HISTORY;". - "set $name copyGplotFile&detail=$name", - "
Create SVG plot from DbLog
", 0, "dval", 1); -return $ret; -} - -################################################################ -# Dropdown-Menü cuurent-Tabelle SVG-Editor -################################################################ -sub DbLog_sampleDataFn($$$$$) { - my ($dlName, $dlog, $max, $conf, $wName) = @_; - my $desc = "Device:Reading"; - my @htmlArr; - my @example; - my @colregs; - my $counter; - my $currentPresent = AttrVal($dlName,'DbLogType','History'); - - my $dbhf = DbLog_ConnectNewDBH($defs{$dlName}); - return if(!$dbhf); - - # check presence of table current - # avoids fhem from crash if table 'current' is not present and attr DbLogType is set to /Current/ - my $prescurr = eval {$dbhf->selectrow_array("select count(*) from current");} || 0; - Log3($dlName, 5, "DbLog $dlName: Table current present : $prescurr (0 = not present or no content)"); - - if($currentPresent =~ m/Current|SampleFill/ && $prescurr) { - # Table Current present, use it for sample data - my $query = "select device,reading from current where device <> '' group by device,reading"; - my $sth = $dbhf->prepare( $query ); - $sth->execute(); - while (my @line = $sth->fetchrow_array()) { - $counter++; - push (@example, join (" ",@line)) if($counter <= 8); # show max 8 examples - push (@colregs, "$line[0]:$line[1]"); # push all eventTypes to selection list - } - $dbhf->disconnect(); - my $cols = join(",", sort { "\L$a" cmp "\L$b" } @colregs); - - # $max = 8 if($max > 8); # auskommentiert 27.02.2018, Notwendigkeit unklar (forum:#76008) - for(my $r=0; $r < $max; $r++) { - my @f = split(":", ($dlog->[$r] ? $dlog->[$r] : ":::"), 4); - my $ret = ""; - $ret .= SVG_sel("par_${r}_0", $cols, "$f[0]:$f[1]"); -# $ret .= SVG_txt("par_${r}_2", "", $f[2], 1); # Default not yet implemented -# $ret .= SVG_txt("par_${r}_3", "", $f[3], 3); # Function -# $ret .= SVG_txt("par_${r}_4", "", $f[4], 3); # RegExp - push @htmlArr, $ret; - } - - } else { - # Table Current not present, so create an empty input field - push @example, "No sample data due to missing table 'Current'"; - - # $max = 8 if($max > 8); # auskommentiert 27.02.2018, Notwendigkeit unklar (forum:#76008) - for(my $r=0; $r < $max; $r++) { - my @f = split(":", ($dlog->[$r] ? $dlog->[$r] : ":::"), 4); - my $ret = ""; - no warnings 'uninitialized'; # Forum:74690, bug unitialized - $ret .= SVG_txt("par_${r}_0", "", "$f[0]:$f[1]:$f[2]:$f[3]", 20); - use warnings; -# $ret .= SVG_txt("par_${r}_2", "", $f[2], 1); # Default not yet implemented -# $ret .= SVG_txt("par_${r}_3", "", $f[3], 3); # Function -# $ret .= SVG_txt("par_${r}_4", "", $f[4], 3); # RegExp - push @htmlArr, $ret; - } - - } - -return ($desc, \@htmlArr, join("
", @example)); -} - -################################################################ -# -# Charting Specific functions start here -# -################################################################ - -################################################################ -# -# Error handling, returns a JSON String -# -################################################################ -sub DbLog_jsonError($) { - my $errormsg = $_[0]; - my $json = '{"success": "false", "msg":"'.$errormsg.'"}'; - return $json; -} - - -################################################################ -# -# Prepare the SQL String -# -################################################################ -sub DbLog_prepareSql(@) { - - my ($hash, @a) = @_; - my $starttime = $_[5]; - $starttime =~ s/_/ /; - my $endtime = $_[6]; - $endtime =~ s/_/ /; - my $device = $_[7]; - my $userquery = $_[8]; - my $xaxis = $_[9]; - my $yaxis = $_[10]; - my $savename = $_[11]; - my $jsonChartConfig = $_[12]; - my $pagingstart = $_[13]; - my $paginglimit = $_[14]; - my $dbmodel = $hash->{MODEL}; - my ($sql, $jsonstring, $countsql, $hourstats, $daystats, $weekstats, $monthstats, $yearstats); - - if ($dbmodel eq "POSTGRESQL") { - ### POSTGRESQL Queries for Statistics ### - ### hour: - $hourstats = "SELECT to_char(timestamp, 'YYYY-MM-DD HH24:00:00') AS TIMESTAMP, SUM(VALUE::float) AS SUM, "; - $hourstats .= "AVG(VALUE::float) AS AVG, MIN(VALUE::float) AS MIN, MAX(VALUE::float) AS MAX, "; - $hourstats .= "COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $hourstats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - ### day: - $daystats = "SELECT to_char(timestamp, 'YYYY-MM-DD 00:00:00') AS TIMESTAMP, SUM(VALUE::float) AS SUM, "; - $daystats .= "AVG(VALUE::float) AS AVG, MIN(VALUE::float) AS MIN, MAX(VALUE::float) AS MAX, "; - $daystats .= "COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $daystats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - ### week: - $weekstats = "SELECT date_trunc('week',timestamp) AS TIMESTAMP, SUM(VALUE::float) AS SUM, "; - $weekstats .= "AVG(VALUE::float) AS AVG, MIN(VALUE::float) AS MIN, MAX(VALUE::float) AS MAX, "; - $weekstats .= "COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $weekstats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - ### month: - $monthstats = "SELECT to_char(timestamp, 'YYYY-MM-01 00:00:00') AS TIMESTAMP, SUM(VALUE::float) AS SUM, "; - $monthstats .= "AVG(VALUE::float) AS AVG, MIN(VALUE::float) AS MIN, MAX(VALUE::float) AS MAX, "; - $monthstats .= "COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $monthstats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - ### year: - $yearstats = "SELECT to_char(timestamp, 'YYYY-01-01 00:00:00') AS TIMESTAMP, SUM(VALUE::float) AS SUM, "; - $yearstats .= "AVG(VALUE::float) AS AVG, MIN(VALUE::float) AS MIN, MAX(VALUE::float) AS MAX, "; - $yearstats .= "COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $yearstats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - } elsif ($dbmodel eq "MYSQL") { - ### MYSQL Queries for Statistics ### - ### hour: - $hourstats = "SELECT date_format(timestamp, '%Y-%m-%d %H:00:00') AS TIMESTAMP, SUM(CAST(VALUE AS DECIMAL(12,4))) AS SUM, "; - $hourstats .= "AVG(CAST(VALUE AS 
DECIMAL(12,4))) AS AVG, MIN(CAST(VALUE AS DECIMAL(12,4))) AS MIN, "; - $hourstats .= "MAX(CAST(VALUE AS DECIMAL(12,4))) AS MAX, COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' "; - $hourstats .= "AND DEVICE = '$device' AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - ### day: - $daystats = "SELECT date_format(timestamp, '%Y-%m-%d 00:00:00') AS TIMESTAMP, SUM(CAST(VALUE AS DECIMAL(12,4))) AS SUM, "; - $daystats .= "AVG(CAST(VALUE AS DECIMAL(12,4))) AS AVG, MIN(CAST(VALUE AS DECIMAL(12,4))) AS MIN, "; - $daystats .= "MAX(CAST(VALUE AS DECIMAL(12,4))) AS MAX, COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' "; - $daystats .= "AND DEVICE = '$device' AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - ### week: - $weekstats = "SELECT date_format(timestamp, '%Y-%m-%d 00:00:00') AS TIMESTAMP, SUM(CAST(VALUE AS DECIMAL(12,4))) AS SUM, "; - $weekstats .= "AVG(CAST(VALUE AS DECIMAL(12,4))) AS AVG, MIN(CAST(VALUE AS DECIMAL(12,4))) AS MIN, "; - $weekstats .= "MAX(CAST(VALUE AS DECIMAL(12,4))) AS MAX, COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' "; - $weekstats .= "AND DEVICE = '$device' AND TIMESTAMP Between '$starttime' AND '$endtime' "; - $weekstats .= "GROUP BY date_format(timestamp, '%Y-%u 00:00:00') ORDER BY 1;"; - - ### month: - $monthstats = "SELECT date_format(timestamp, '%Y-%m-01 00:00:00') AS TIMESTAMP, SUM(CAST(VALUE AS DECIMAL(12,4))) AS SUM, "; - $monthstats .= "AVG(CAST(VALUE AS DECIMAL(12,4))) AS AVG, MIN(CAST(VALUE AS DECIMAL(12,4))) AS MIN, "; - $monthstats .= "MAX(CAST(VALUE AS DECIMAL(12,4))) AS MAX, COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' "; - $monthstats .= "AND DEVICE = '$device' AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - ### year: - $yearstats = "SELECT date_format(timestamp, '%Y-01-01 00:00:00') AS TIMESTAMP, SUM(CAST(VALUE AS DECIMAL(12,4))) AS SUM, "; - $yearstats .= "AVG(CAST(VALUE AS DECIMAL(12,4))) AS AVG, MIN(CAST(VALUE AS DECIMAL(12,4))) AS MIN, "; - $yearstats .= "MAX(CAST(VALUE AS DECIMAL(12,4))) AS MAX, COUNT(VALUE) AS COUNT FROM history WHERE READING = '$yaxis' "; - $yearstats .= "AND DEVICE = '$device' AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY 1 ORDER BY 1;"; - - } elsif ($dbmodel eq "SQLITE") { - ### SQLITE Queries for Statistics ### - ### hour: - $hourstats = "SELECT TIMESTAMP, SUM(CAST(VALUE AS FLOAT)) AS SUM, AVG(CAST(VALUE AS FLOAT)) AS AVG, "; - $hourstats .= "MIN(CAST(VALUE AS FLOAT)) AS MIN, MAX(CAST(VALUE AS FLOAT)) AS MAX, COUNT(VALUE) AS COUNT "; - $hourstats .= "FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $hourstats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY strftime('%Y-%m-%d %H:00:00', TIMESTAMP);"; - - ### day: - $daystats = "SELECT TIMESTAMP, SUM(CAST(VALUE AS FLOAT)) AS SUM, AVG(CAST(VALUE AS FLOAT)) AS AVG, "; - $daystats .= "MIN(CAST(VALUE AS FLOAT)) AS MIN, MAX(CAST(VALUE AS FLOAT)) AS MAX, COUNT(VALUE) AS COUNT "; - $daystats .= "FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $daystats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY strftime('%Y-%m-%d 00:00:00', TIMESTAMP);"; - - ### week: - $weekstats = "SELECT TIMESTAMP, SUM(CAST(VALUE AS FLOAT)) AS SUM, AVG(CAST(VALUE AS FLOAT)) AS AVG, "; - $weekstats .= "MIN(CAST(VALUE AS FLOAT)) AS MIN, MAX(CAST(VALUE AS FLOAT)) AS MAX, COUNT(VALUE) AS COUNT "; - $weekstats .= "FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $weekstats .= "AND 
TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY strftime('%Y-%W 00:00:00', TIMESTAMP);"; - - ### month: - $monthstats = "SELECT TIMESTAMP, SUM(CAST(VALUE AS FLOAT)) AS SUM, AVG(CAST(VALUE AS FLOAT)) AS AVG, "; - $monthstats .= "MIN(CAST(VALUE AS FLOAT)) AS MIN, MAX(CAST(VALUE AS FLOAT)) AS MAX, COUNT(VALUE) AS COUNT "; - $monthstats .= "FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $monthstats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY strftime('%Y-%m 00:00:00', TIMESTAMP);"; - - ### year: - $yearstats = "SELECT TIMESTAMP, SUM(CAST(VALUE AS FLOAT)) AS SUM, AVG(CAST(VALUE AS FLOAT)) AS AVG, "; - $yearstats .= "MIN(CAST(VALUE AS FLOAT)) AS MIN, MAX(CAST(VALUE AS FLOAT)) AS MAX, COUNT(VALUE) AS COUNT "; - $yearstats .= "FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $yearstats .= "AND TIMESTAMP Between '$starttime' AND '$endtime' GROUP BY strftime('%Y 00:00:00', TIMESTAMP);"; - - } else { - $sql = "errordb"; - } - - if($userquery eq "getreadings") { - $sql = "SELECT distinct(reading) FROM history WHERE device = '".$device."'"; - } elsif($userquery eq "getdevices") { - $sql = "SELECT distinct(device) FROM history"; - } elsif($userquery eq "timerange") { - $sql = "SELECT ".$xaxis.", VALUE FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' AND TIMESTAMP Between '$starttime' AND '$endtime' ORDER BY TIMESTAMP;"; - } elsif($userquery eq "hourstats") { - $sql = $hourstats; - } elsif($userquery eq "daystats") { - $sql = $daystats; - } elsif($userquery eq "weekstats") { - $sql = $weekstats; - } elsif($userquery eq "monthstats") { - $sql = $monthstats; - } elsif($userquery eq "yearstats") { - $sql = $yearstats; - } elsif($userquery eq "savechart") { - $sql = "INSERT INTO frontend (TYPE, NAME, VALUE) VALUES ('savedchart', '$savename', '$jsonChartConfig')"; - } elsif($userquery eq "renamechart") { - $sql = "UPDATE frontend SET NAME = '$savename' WHERE ID = '$jsonChartConfig'"; - } elsif($userquery eq "deletechart") { - $sql = "DELETE FROM frontend WHERE TYPE = 'savedchart' AND ID = '".$savename."'"; - } elsif($userquery eq "updatechart") { - $sql = "UPDATE frontend SET VALUE = '$jsonChartConfig' WHERE ID = '".$savename."'"; - } elsif($userquery eq "getcharts") { - $sql = "SELECT * FROM frontend WHERE TYPE = 'savedchart'"; - } elsif($userquery eq "getTableData") { - if ($device ne '""' && $yaxis ne '""') { - $sql = "SELECT * FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $sql .= "AND TIMESTAMP Between '$starttime' AND '$endtime'"; - $sql .= " LIMIT '$paginglimit' OFFSET '$pagingstart'"; - $countsql = "SELECT count(*) FROM history WHERE READING = '$yaxis' AND DEVICE = '$device' "; - $countsql .= "AND TIMESTAMP Between '$starttime' AND '$endtime'"; - } elsif($device ne '""' && $yaxis eq '""') { - $sql = "SELECT * FROM history WHERE DEVICE = '$device' "; - $sql .= "AND TIMESTAMP Between '$starttime' AND '$endtime'"; - $sql .= " LIMIT '$paginglimit' OFFSET '$pagingstart'"; - $countsql = "SELECT count(*) FROM history WHERE DEVICE = '$device' "; - $countsql .= "AND TIMESTAMP Between '$starttime' AND '$endtime'"; - } else { - $sql = "SELECT * FROM history"; - $sql .= " WHERE TIMESTAMP Between '$starttime' AND '$endtime'"; - $sql .= " LIMIT '$paginglimit' OFFSET '$pagingstart'"; - $countsql = "SELECT count(*) FROM history"; - $countsql .= " WHERE TIMESTAMP Between '$starttime' AND '$endtime'"; - } - return ($sql, $countsql); - } else { - $sql = "error"; - } - - return $sql; -} - 
-################################################################ -# -# Do the query -# -################################################################ -sub DbLog_chartQuery($@) { - - my ($sql, $countsql) = DbLog_prepareSql(@_); - - if ($sql eq "error") { - return DbLog_jsonError("Could not setup SQL String. Maybe the Database is busy, please try again!"); - } elsif ($sql eq "errordb") { - return DbLog_jsonError("The Database Type is not supported!"); - } - - my ($hash, @a) = @_; - my $dbhf = DbLog_ConnectNewDBH($hash); - return if(!$dbhf); - - my $totalcount; - - if (defined $countsql && $countsql ne "") { - my $query_handle = $dbhf->prepare($countsql) - or return DbLog_jsonError("Could not prepare statement: " . $dbhf->errstr . ", SQL was: " .$countsql); - - $query_handle->execute() - or return DbLog_jsonError("Could not execute statement: " . $query_handle->errstr); - - my @data = $query_handle->fetchrow_array(); - $totalcount = join(", ", @data); - - } - - # prepare the query - my $query_handle = $dbhf->prepare($sql) - or return DbLog_jsonError("Could not prepare statement: " . $dbhf->errstr . ", SQL was: " .$sql); - - # execute the query - $query_handle->execute() - or return DbLog_jsonError("Could not execute statement: " . $query_handle->errstr); - - my $columns = $query_handle->{'NAME'}; - my $columncnt; - - # When columns are empty but execution was successful, we have done a successful INSERT, UPDATE or DELETE - if($columns) { - $columncnt = scalar @$columns; - } else { - return '{"success": "true", "msg":"All ok"}'; - } - - my $i = 0; - my $jsonstring = '{"data":['; - - while ( my @data = $query_handle->fetchrow_array()) { - - if($i == 0) { - $jsonstring .= '{'; - } else { - $jsonstring .= ',{'; - } - - for ($i = 0; $i < $columncnt; $i++) { - $jsonstring .= '"'; - $jsonstring .= uc($query_handle->{NAME}->[$i]); - $jsonstring .= '":'; - - if (defined $data[$i]) { - my $fragment = substr($data[$i],0,1); - if ($fragment eq "{") { - $jsonstring .= $data[$i]; - } else { - $jsonstring .= '"'.$data[$i].'"'; - } - } else { - $jsonstring .= '""' - } - - if($i != ($columncnt -1)) { - $jsonstring .= ','; - } - } - $jsonstring .= '}'; - } - $dbhf->disconnect(); - $jsonstring .= ']'; - if (defined $totalcount && $totalcount ne "") { - $jsonstring .= ',"totalCount": '.$totalcount.'}'; - } else { - $jsonstring .= '}'; - } -return $jsonstring; -} - -# -# get ReadingsVal -# get ReadingsTimestamp -# -sub DbLog_dbReadings($@) { - my($hash,@a) = @_; - - my $dbhf = DbLog_ConnectNewDBH($hash); - return if(!$dbhf); - - return 'Wrong Syntax for ReadingsVal!' unless defined($a[4]); - my $DbLogType = AttrVal($a[0],'DbLogType','current'); - my $query; - if (lc($DbLogType) =~ m(current) ) { - $query = "select VALUE,TIMESTAMP from current where DEVICE= '$a[2]' and READING= '$a[3]'"; - } else { - $query = "select VALUE,TIMESTAMP from history where DEVICE= '$a[2]' and READING= '$a[3]' order by TIMESTAMP desc limit 1"; - } - my ($reading,$timestamp) = $dbhf->selectrow_array($query); - $dbhf->disconnect(); - - $reading = (defined($reading)) ? $reading : $a[4]; - $timestamp = (defined($timestamp)) ? $timestamp : $a[4]; - return $reading if $a[1] eq 'ReadingsVal'; - return $timestamp if $a[1] eq 'ReadingsTimestamp'; - return "Syntax error: $a[1]"; -} - -1; - -=pod -=item helper -=item summary logs events into a database -=item summary_DE loggt Events in eine Datenbank -=begin html - - -

DbLog

-
    -
    - With DbLog events can be stored in a database. SQLite, MySQL/MariaDB and PostgreSQL are supported databases.

    - - Prerequisites

    - - The Perl modules DBI and DBD::<dbtype> need to be installed (use cpan -i <module> - if your distribution does not provide them). -

    - - On a Debian-based system you can install these modules, for instance, with:

    - -
      - - - - - - -
      DBI : sudo apt-get install libdbi-perl
      MySQL : sudo apt-get install [mysql-server] mysql-client libdbd-mysql libdbd-mysql-perl (mysql-server only if you use a local MySQL-server installation)
      SQLite : sudo apt-get install sqlite3 libdbi-perl libdbd-sqlite3-perl
      PostgreSQL : sudo apt-get install libdbd-pg-perl
      -
    -
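    - - To quickly verify that the required Perl modules can be loaded, a one-liner such as the following may be used (a minimal sketch; replace DBD::mysql with the driver matching your database):
    -
    -       perl -MDBI -MDBD::mysql -e 'print "DBI $DBI::VERSION loaded\n"'
    -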
    -
    - - Preparations

    - - First you need to set up the database.
    - Sample code and scripts to prepare a MySQL/PostgreSQL/SQLite database can be found in - SVN -> contrib/dblog/db_create_<DBType>.sql.
    - (Caution: The local FHEM installation subdirectory ./contrib/dblog does not necessarily contain the latest scripts !!) -

    - - The database contains two tables: current and history.
    - The latter contains all events whereas the former only contains the last event for any given reading and device. - Please consider the attribute DbLogType to determine how the tables - current and history are used. -
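    - - Example (assuming a DbLog device named "myDbLog"):
    -       attr myDbLog DbLogType Current/History
    -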

    - - The columns have the following meaning:

    - -
      - - - - - - - - - -
      TIMESTAMP : timestamp of event, e.g. 2007-12-30 21:45:22
      DEVICE : device name, e.g. Wetterstation
      TYPE : device type, e.g. KS300
      EVENT : event specification as full string, e.g. humidity: 71 (%)
      READING : name of reading extracted from event, e.g. humidity
      VALUE : actual reading extracted from event, e.g. 71
      UNIT : unit extracted from event, e.g. %
      -
    -
    -
    - - create index
    - For read performance, e.g. when creating SVG plots, it is very important that the index "Search_Idx" - or a comparable index (e.g. a primary key) is applied. - Sample code for the creation of that index is also available in the mentioned scripts of - SVN -> contrib/dblog/db_create_<DBType>.sql. -

    - - The index "Search_Idx" can be created, e.g. in database 'fhem', by these statements (also subsequently):

    - -
      - - - - - -
      MySQL : CREATE INDEX Search_Idx ON `fhem`.`history` (DEVICE, READING, TIMESTAMP);
      SQLite : CREATE INDEX Search_Idx ON `history` (DEVICE, READING, TIMESTAMP);
      PostgreSQL : CREATE INDEX "Search_Idx" ON history USING btree (device, reading, "timestamp");
      -
    -
    - - For the connection to the database a configuration file is used. - The configuration is stored in a separate file to avoid storing the password in the main configuration file and to keep it - out of the output of the list command. -

    - - The configuration file should be copied e.g. to /opt/fhem and has the following structure, which you have to adapt - to your conditions (uncomment the appropriate rows and adjust them):

    - -
    -    ####################################################################################
    -    # database configuration file     
    -    # 	
    -    # NOTE:
    -    # If you don't use a value for user / password please delete the leading hash mark
    -    # and write 'user => ""' respectively 'password => ""' instead !	
    -    #
    -    #
    -    ## for MySQL                                                      
    -    ####################################################################################
    -    #%dbconfig= (                                                    
    -    #    connection => "mysql:database=fhem;host=<database host>;port=3306",       
    -    #    user => "fhemuser",                                          
    -    #    password => "fhempassword",
    -    #    # optional enable(1) / disable(0) UTF-8 support (at least V 4.042 is necessary) 	
    -    #    utf8 => 1   
    -    #);                                                              
    -    ####################################################################################
    -    #                                                                
    -    ## for PostgreSQL                                                
    -    ####################################################################################
    -    #%dbconfig= (                                                   
    -    #    connection => "Pg:database=fhem;host=<database host>",        
    -    #    user => "fhemuser",                                     
    -    #    password => "fhempassword"                              
    -    #);                                                              
    -    ####################################################################################
    -    #                                                                
    -    ## for SQLite (username and password stay empty for SQLite)      
    -    ####################################################################################
    -    #%dbconfig= (                                                   
    -    #    connection => "SQLite:dbname=/opt/fhem/fhem.db",        
    -    #    user => "",                                             
    -    #    password => ""                                          
    -    #);                                                              
    -    ####################################################################################
    -	
    - If configDB is used, the configuration file has to be uploaded into the configDB !

    - - Note about special characters:
    - If special characters, e.g. @, $ or %, which have a meaning in the Perl programming - language, are used in a password, these special characters have to be escaped. - That means in this example you have to use: \@, \$ and \% respectively. -
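    - - For example, a hypothetical password pass@word%1 would have to be entered in the configuration file as:
    -       password => "pass\@word\%1",
    -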
    -
    -
    - - - Define -
      -
      - - define <name> DbLog <configfilename> <regexp> -

      - - <configfilename> is the prepared configuration file.
      - <regexp> is identical to the specification of regex in the FileLog definition. -

      - - Example: -
        - define myDbLog DbLog /etc/fhem/db.conf .*:.*
        - all events will be stored in the database -
      -
      - - After you have defined your DbLog-device it is recommended to run the configuration check

      -
        - set <name> configCheck
        -
      -
      - This check reports some important settings and gives recommendations back to you if improvements are identified. -

      - - DbLog distinguishes between the synchronous (default) and asynchronous logmode. The logmode is adjustable by the - attribute asyncMode. Since version 2.13.5 DbLog supports a primary key (PK) set in table - current or history. If you want to use PostgreSQL with a PK, it has to be at least version 9.5. -

      - - The content of VALUE will be optimized for automated post-processing, e.g. yes is translated to 1 -

      - - The stored values can be retrieved by the following code like FileLog:
      -
        - get myDbLog - - 2012-11-10 2012-11-10 KS300:temperature:: -
      -
      - - transfer FileLog-data to DbLog

      - There is the special module 98_FileLogConvert.pm available to transfer filelog-data to the DbLog-database.
      - The module can be downloaded here - or from directory ./contrib instead. - Further information and help can be found in the corresponding - Forumthread .


      - - Reporting and Management of DbLog database content

      - By using SVG database content can be visualized.
      - Beyond that the module DbRep can be used to prepare tabular - database reports or you can manage the database content with available functions of that module. -


      - - Troubleshooting

      - If after successful definition the DbLog-device doesn't work as expected, the following notes may help: -

      - -
        -
      • Have the preparatory steps as described in commandref been done ? (install software components, create tables and index)
      • -
      • Was "set <name> configCheck" executed after definition and potential errors fixed or rather the hints implemented ?
      • -
      • If configDB is used ... has the database configuration file been imported into configDB (e.g. by "configDB fileimport ./db.conf") ?
      • -
      • When creating an SVG-plot and no drop-down list with proposed values appears -> set attribute "DbLogType" to "Current/History".
      • -
      -
      - If these notes don't lead to success, please increase the verbose level of the DbLog-device to 4 or 5 and observe the - logfile entries relating to the DbLog-device. - For problem analysis please post the output of "list <name>", the result of "set <name> configCheck" and the - logfile entries of the DbLog-device to the forum thread. -

      - -
    -
    -
    - - - - Set -
      - set <name> addCacheLine YYYY-MM-DD HH:MM:SS|<device>|<type>|<event>|<reading>|<value>|[<unit>]

      -
        In asynchronous mode a new dataset is inserted into the cache and will be processed at the next database sync cycle. -

        - - Example:
        - set <name> addCacheLine 2017-12-05 17:03:59|MaxBathRoom|MAX|valveposition: 95|valveposition|95|%
        -

      - - set <name> addLog <devspec>:<Reading> [Value] [CN=<caller name>] [!useExcludes]

      -
        Inserts an additional log entry of a device/reading combination into the database.

        - -
          -
        • <devspec>:<Reading> - The device can be declared by a device specification - (devspec). "Reading" will be evaluated as a regular expression. If - the reading isn't available and the value "Value" is specified, the - reading will be added to the database as a new one, provided it isn't a regular - expression and the reading name is valid.
        • -
        • Value - Optionally you can enter a "Value" that is used as reading value in the dataset. If the value isn't - specified (default), the current value of the specified reading will be inserted into the database.
        • -
        • CN=<caller name> - With the key "CN=" (Caller Name) you can specify an additional string, - e.g. the name of a calling device (for example an at- or notify-device). - Via the function defined in attribute "valueFn" this key can be analyzed - by the variable $CN. Thereby it is possible to control the behavior of the addLog depending - on the calling source.
        • -
        • !useExcludes - The function considers attribute "DbLogExclude" in the source device if it is set. If the optional - keyword "!useExcludes" is set, the attribute "DbLogExclude" isn't considered.
        • -
        -
        - - The database field "EVENT" will be filled with the string "addLog" automatically.
        - The addLog-command doesn't create an additional event in your system !

        - - Examples:
        - set <name> addLog SMA_Energymeter:Bezug_Wirkleistung
        - set <name> addLog TYPE=SSCam:state
        - set <name> addLog MyWetter:(fc10.*|fc8.*)
        - set <name> addLog MyWetter:(wind|wind_ch.*) 20 !useExcludes
        - set <name> addLog TYPE=CUL_HM:FILTER=model=HM-CC-RT-DN:FILTER=subType!=(virtual|):(measured-temp|desired-temp|actuator)

        - - set <name> addLog USV:state CN=di.cronjob
        - In the valueFn-function the caller "di.cronjob" is evaluated via the variable $CN and the timestamp is corrected:

        - valueFn = if($CN eq "di.cronjob" and $TIMESTAMP =~ m/\s00:00:[\d:]+/) { $TIMESTAMP =~ s/\s([^\s]+)/ 23:59:59/ } - -

      - - set <name> clearReadings

      -
        This function clears readings which were created by different DbLog-functions.

      - - set <name> commitCache

      -
        In asynchronous mode (attribute asyncMode=1), the cached data in memory will be written into the database - and subsequently the cache will be cleared. Thereby the internal timer for the asynchronous mode will be reset. - The command can be useful if you want to write the cached data into the database manually, or e.g. by an at-device, at a defined - point in time.
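        - - Example (a hypothetical at-device that writes the cached data to the database every night):
        - define flushCache at *23:55:00 set myDbLog commitCache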

      - - set <name> configCheck

      -
        This command checks some important settings and gives recommendations back to you if improvements are identified. -

      - - set <name> count

      -
        Count records in tables current and history and write results into readings countCurrent and countHistory.

      - - set <name> countNbl

      -
        The non-blocking execution of "set <name> count".

      - - set <name> deleteOldDays <n>

      -
        Delete records from history older than <n> days. Number of deleted records will be written into reading - lastRowsDeleted. -
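        - - Example (assuming a DbLog device named "myDbLog"):
        - set myDbLog deleteOldDays 365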

      - - set <name> deleteOldDaysNbl <n>

      -
        - Identical to the function "deleteOldDays", but deleteOldDaysNbl is executed non-blocking. -

        - - Note:
        - Even though the function itself is non-blocking, you have to set DbLog into the asynchronous mode (attr asyncMode = 1) to - avoid a blocking situation of FHEM ! - -
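        - - Example (switching the hypothetical device "myDbLog" to asynchronous mode):
        - attr myDbLog asyncMode 1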
      -
      - - set <name> eraseReadings

      -
        This function deletes all readings except reading "state".

      - - - set <name> exportCache [nopurge | purgecache]

      -
        If DbLog is operating in asynchronous mode, it's possible to export the cache content into a text file. - By default the file will be written to the directory (global->modpath)/log/. The destination directory can be - changed by the attribute expimpdir.
        - The filename will be generated automatically and consists of the prefix "cache_", followed by the DbLog device name and the - current timestamp, e.g. "cache_LogDB_2017-03-23_22-13-55".
        - There are two possible options, "nopurge" and "purgecache". The option determines whether the cache content - will be deleted after export or not. - With option "nopurge" (default) the cache content will be preserved.
        - The attribute "exportCacheAppend" defines, whether every export process creates a new export file - (default) or the cache content is appended to an existing (newest) export file. -

      - - set <name> importCachefile <file>

      -
        Imports a text file into the database which has been written by the "exportCache" function. - By default the eligible files are searched in directory (global->modpath)/log/ and a drop-down list is - generated from the files found in that directory. - The source directory can be changed by the attribute expimpdir.
        - Only files are shown which match the pattern starting with "cache_", followed by the DbLog device name.
        - For example, a file with the name "cache_LogDB_2017-03-23_22-13-55" will match if the DbLog-device has the name "LogDB".
        - After the import has been successfully done, the prefix "impdone_" will be added at the beginning of the filename and this file - doesn't appear on the drop-down list anymore.
        - If you want to import a cache file from another source database, you may adapt the filename so it fits the search criteria - "DbLog-Device" in its name. After renaming, the file appears again on the drop-down list.
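        - - Example (using the filename pattern shown above):
        - set myDbLog importCachefile cache_LogDB_2017-03-23_22-13-55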

      - - set <name> listCache

      -
        If DbLog is set to asynchronous mode (attribute asyncMode=1), you can use that command to list the events cached in memory.

      - - set <name> purgeCache

      -
        In asynchronous mode (attribute asyncMode=1), the data cached in memory will be deleted. - With this command data won't be written from the cache into the database.

      - - set <name> reduceLog <no>[:<nn>] [average[=day]] [exclude=device1:reading1,device2:reading2,...]

      -
        Reduces records older than <no> days and (optionally) newer than <nn> days to one record (the first) per hour and per device & reading.
        - Within the device/reading name SQL-Wildcards "%" and "_" can be used.

        - - With the optional argument 'average' not only the records will be reduced, but all numerical values of an hour - will be reduced to a single average.
        - With the optional argument 'average=day' not only the records will be reduced, but all numerical values of a - day will be reduced to a single average. (implies 'average')

        - - You can optionally set the last argument to "exclude=device1:reading1,device2:reading2,..." to exclude - device/readings from reduceLog.
        - Alternatively you can set the last argument to "include=device:reading" to limit the SELECT statement which - is executed on the database. This may reduce the system RAM load and increase performance.

        - -
          - Example:
          - set <name> reduceLog 270 average include=Luftdaten_remote:%
          -
        -
        - - CAUTION: It is strongly recommended to check if the default INDEX 'Search_Idx' exists on the table 'history'!
        - Without that INDEX the execution of this command may take an extremely long time. FHEM will be blocked completely from issuing the command until completion !

        - -

      - - set <name> reduceLogNbl <no>[:<nn>] [average[=day]] [exclude=device1:reading1,device2:reading2,...]

      -
        Same function as "set <name> reduceLog" but FHEM won't be blocked due to this function is implemented - non-blocking !

        - - Note:
        - Even though the function itself is non-blocking, you have to set DbLog into the asynchronous mode (attr asyncMode = 1) to - avoid a blocking situation of FHEM ! - -

      - - set <name> reopen [n]

      -
        Perform a database disconnect and immediate reconnect to clear cache and flush journal file if no time [n] was set.
        - If a delay time of [n] seconds was set optionally, the database connection will be closed immediately and only reopened - after [n] seconds. In synchronous mode the events won't be saved during that time. In asynchronous mode the events will be - stored in the memory cache and saved into the database after the reconnect has been done.
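        - - Example (closing the connection of the hypothetical device "myDbLog" for one hour):
        - set myDbLog reopen 3600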

      - - set <name> rereadcfg

      -
        Perform a database disconnect and immediate reconnect to clear cache and flush journal file.
        - Probably the same behavior as reopen, but rereadcfg will read the configuration data before reconnecting.

      - - set <name> userCommand <validSqlStatement>

      -
        DO NOT USE THIS COMMAND UNLESS YOU REALLY (REALLY!) KNOW WHAT YOU ARE DOING!!!

        - Performs any (!!!) SQL statement on the connected database. The user command and its result will be written into - corresponding readings.
        - The result can only be a single line. If the SQL statement delivers a multiline result, it is more suitable - to use the analysis module DbRep.
        - If the database interface delivers no result (undef), the reading "userCommandResult" contains the message - "no result". -
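        - - Example (a simple single-line statement, assuming a DbLog device named "myDbLog"; use with care):
        - set myDbLog userCommand select count(*) from history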

      - -

    - - - Get -
      - get <name> ReadingsVal       <device> <reading> <default>
      - get <name> ReadingsTimestamp <device> <reading> <default>
      -
      - Retrieve one single value, use and syntax are similar to ReadingsVal() and ReadingsTimestamp() functions.
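      - - Examples (assuming a DbLog device named "myDbLog"):
      - get myDbLog ReadingsVal KS300 temperature 0
      - get myDbLog ReadingsTimestamp KS300 temperature ""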
      -
    -
    -
    -
      - get <name> <infile> <outfile> <from> - <to> <column_spec> -

      - Read data from the Database, used by frontends to plot data without direct - access to the Database.
      - -
        -
      • <in>
        - A dummy parameter for FileLog compatibility. Set by default to -
        -
          -
        • current: reading actual readings from table "current"
        • -
        • history: reading history readings from table "history"
        • -
        • -: identical to "history"
        • -
        -
      • -
      • <out>
        - A dummy parameter for FileLog compatibility. Set by default to - - to check the output for plot-computing.
        - Set it to the special keyword - all to get all columns from Database. -
          -
        • ALL: get all columns from the table, including a header
        • -
        • Array: get the columns as array of hashes
        • -
        • INT: internally used by generating plots
        • -
        • -: default
        • -
        -
      • -
      • <from> / <to>
        - Used to select the data. Please use the following timeformat or - an initial substring of it:
        -
          YYYY-MM-DD_HH24:MI:SS
      • -
      • <column_spec>
        - For each column_spec return a set of data separated by - a comment line on the current connection.
        - Syntax: <device>:<reading>:<default>:<fn>:<regexp>
        -
          -
        • <device>
          - The name of the device. Case sensitive. Using the joker "%" is supported.
        • -
        • <reading>
          - The reading of the given device to select. Case sensitive. Using the joker "%" is supported. -
        • -
        • <default>
          - not implemented yet -
        • -
        • <fn> - One of the following: -
            -
          • int
            - Extract the integer at the beginning of the string. Used e.g. - for constructs like 10%
          • -
          • int<digit>
            - Extract the decimal digits including negative sign and - decimal point at the beginning of the string. Used e.g. - for constructs like 15.7°C
          • -
          • delta-h / delta-d
            - Return the delta of the values for a given hour or a given day. - Used if the column contains a counter, as is the case for the - KS300 rain column.
          • -
          • delta-ts
            - Replaces the original value with the number of seconds between - the previous and the current log entry. -
          • -
        • -
        • <regexp>
          - The string is evaluated as a Perl expression. The regexp is executed - before the <fn> parameter.
          - Note: The string/perl expression cannot contain spaces, - as the part after the space will be considered as the - next column_spec.
          - Keywords -
        • $val is the current value returned from the Database.
        • -
        • $ts is the current timestamp returned from the Database.
        • -
        • This log entry will not be printed if $val contains the keyword "hide".
        • -
        • This log entry will not be printed and not be used in the following processing - if $val contains the keyword "ignore".
        • - -
      • -
      -

      - Examples: -
        -
      • get myDbLog - - 2012-11-10 2012-11-20 KS300:temperature
      • -
      • get myDbLog current ALL - - %:temperature

        - you will get all current readings "temperature" from all logged devices. - Be careful when using "history" as input file because a long execution time is to be expected! -
      • get myDbLog - - 2012-11-10_10 2012-11-10_20 KS300:temperature::int1
        - i.e. from 10am until 8pm on 2012-11-10
      • -
      • get myDbLog - all 2012-11-10 2012-11-20 KS300:temperature
      • -
      • get myDbLog - - 2012-11-10 2012-11-20 KS300:temperature KS300:rain::delta-h KS300:rain::delta-d
      • -
      • get myDbLog - - 2012-11-10 2012-11-20 MyFS20:data:::$val=~s/(on|off).*/$1eq"on"?1:0/eg
        - returns 1 for all occurrences of on* (on|on-for-timer etc) and 0 for all off*
      • -
      • get myDbLog - - 2012-11-10 2012-11-20 Bodenfeuchte:data:::$val=~s/.*B:\s([-\.\d]+).*/$1/eg
        - Example of OWAD: value like this: "A: 49.527 % B: 66.647 % C: 9.797 % D: 0.097 V"
        - and output for port B is like this: 2012-11-20_10:23:54 66.647
      • -
      • get DbLog - - 2013-05-26 2013-05-28 Pumpe:data::delta-ts:$val=~s/on/hide/
        - Setting up a "counter of uptime". The function delta-ts returns the seconds between the previous and the - current log entry. The keyword "hide" hides the log entries of "on", because this time span - is a "counter of downtime".
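      • -
      • A hypothetical variant of the previous example (not part of the original set): returning the keyword "ignore" instead of "hide" excludes the matching entries from any subsequent processing as well.
        - get DbLog - - 2013-05-26 2013-05-28 Pumpe:data:::$val=~s/off/ignore/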
      • - -
      -

      -
    - - Get when used for webcharts -
      - get <name> <infile> <outfile> <from> - <to> <device> <querytype> <xaxis> <yaxis> <savename> -

      - Query the Database to retrieve JSON-Formatted Data, which is used by the charting frontend. -
      - -
        -
      • <name>
        - The name of the defined DbLog, like it is given in fhem.cfg.
      • -
      • <in>
        - A dummy parameter for FileLog compatibility. Always set to -
      • -
      • <out>
        - A dummy parameter for FileLog compatibility. Set it to webchart - to use the charting related get function. -
      • -
      • <from> / <to>
        - Used to select the data. Please use the following timeformat:
        -
          YYYY-MM-DD_HH24:MI:SS
      • -
      • <device>
        - A string which represents the device to query.
      • -
      • <querytype>
        - A string which represents the method the query should use. Currently supported values are:
        - getreadings to retrieve the possible readings for a given device
        - getdevices to retrieve all available devices
        - timerange to retrieve charting data, which requires a given xaxis, yaxis, device, to and from
        - savechart to save a chart configuration in the database. Requires a given xaxis, yaxis, device, to and from, and a 'savename' used to save the chart
        - deletechart to delete a saved chart. Requires a given id which was set on save of the chart
        - getcharts to get a list of all saved charts.
        - getTableData to get JSON formatted data from the database. Uses paging parameters like start and limit.
        - hourstats to get statistics for a given value (yaxis) for an hour.
        - daystats to get statistics for a given value (yaxis) for a day.
        - weekstats to get statistics for a given value (yaxis) for a week.
        - monthstats to get statistics for a given value (yaxis) for a month.
        - yearstats to get statistics for a given value (yaxis) for a year.
        -
      • -
      • <xaxis>
        - A string which represents the xaxis
      • -
      • <yaxis>
        - A string which represents the yaxis
      • -
      • <savename>
        - A string which represents the name a chart will be saved with
      • -
      • <chartconfig>
        - A jsonstring which represents the chart to save
      • -
      • <pagingstart>
        - An integer used to determine the start for the sql used for query 'getTableData'
      • -
      • <paginglimit>
        - An integer used to set the limit for the sql used for query 'getTableData'
      • -
      -

      - Examples: -
        -
      • get logdb - webchart "" "" "" getcharts
        - Retrieves all saved charts from the Database
      • -
      • get logdb - webchart "" "" "" getdevices
        - Retrieves all available devices from the Database
      • -
      • get logdb - webchart "" "" ESA2000_LED_011e getreadings
        - Retrieves all available Readings for a given device from the Database
      • -
      • get logdb - webchart 2013-02-11_00:00:00 2013-02-12_00:00:00 ESA2000_LED_011e timerange TIMESTAMP day_kwh
        - Retrieves charting data, which requires a given xaxis, yaxis, device, to and from
        - Will output a JSON like this: [{'TIMESTAMP':'2013-02-11 00:10:10','VALUE':'0.22431388090756'},{'TIMESTAMP'.....}]
      • -
      • get logdb - webchart 2013-02-11_00:00:00 2013-02-12_00:00:00 ESA2000_LED_011e savechart TIMESTAMP day_kwh tageskwh
        - Will save a chart in the database with the given name and the chart configuration parameters
      • -
      • get logdb - webchart "" "" "" deletechart "" "" 7
        - Will delete a chart from the database with the given id
      • -
      -

      -
    - - - Attributes -

    - -
      addStateEvent -
        - attr <device> addStateEvent [0|1] -
        - As you probably know the event associated with the state Reading is special, as the "state: " - string is stripped, i.e. the event is not "state: on" but just "on".
        - Mostly it is desirable to get the complete event without "state: " stripped, so this is the default behavior of DbLog. - That means you will get the state event complete as "state: xxx".
        - In some circumstances, e.g. with older or special modules, it is a good idea to set addStateEvent to "0". - Try this setting if you have trouble with the default behavior. -
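        - - Example (illustrative; the DbLog device name myDbLog is an assumption):
        - attr myDbLog addStateEvent 0 -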
        -
      -
    -
    - -
      asyncMode -
        - attr <device> asyncMode [1|0] -
        - - This attribute determines the operation mode of DbLog. If asynchronous mode is active (asyncMode=1), the events to be saved - are at first cached in memory. After the synchronisation interval (attribute syncInterval) has elapsed, or if the limit of datasets in the cache - is reached (attribute cacheLimit), the cached events are saved into the database using a bulk insert. - If the database isn't available, the events continue to be cached in memory, and saving them into the database is retried after - the next synchronisation interval once the database is available again.
        - In asynchronous mode the data insert into the database is executed non-blocking by a background process. - You can adjust the timeout value for this background process with the attribute "timeout" (default 86400s).
        - In synchronous mode (normal mode) the events are not cached in memory and are saved into the database immediately. If the database isn't - available, the events are lost.
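        - - Example (illustrative; the DbLog device name myDbLog is an assumption):
        - attr myDbLog asyncMode 1 -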
        -
      -
    -
    - -
      commitMode -
        - attr <device> commitMode [basic_ta:on | basic_ta:off | ac:on_ta:on | ac:on_ta:off | ac:off_ta:on] -
        - - Change the usage of database autocommit- and/or transaction- behavior.
        - If transaction "off" is used, unsaved datasets are not returned to the cache in asynchronous mode.
        - This attribute is an advanced feature and should only be used in a concrete situation or support case.

        - -
          -
        • basic_ta:on - autocommit server basic setting / transaction on (default)
        • -
        • basic_ta:off - autocommit server basic setting / transaction off
        • -
        • ac:on_ta:on - autocommit on / transaction on
        • -
        • ac:on_ta:off - autocommit on / transaction off
        • -
        • ac:off_ta:on - autocommit off / transaction on (autocommit "off" sets transaction "on" implicitly)
        • -
        - -
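        - - Example (illustrative; device name and chosen mode are assumptions, for a concrete support case only):
        - attr myDbLog commitMode ac:on_ta:off -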
      -
    -
    - -
      cacheEvents -
        - attr <device> cacheEvents [2|1|0] -
        -
          -
        • cacheEvents=1: events of reading CacheUsage are created at the moment a new dataset is added to the cache.
        • -
        • cacheEvents=2: events of reading CacheUsage are created at the moment a new write cycle to the - database starts in asynchronous mode. At that moment CacheUsage contains the number of datasets which will be written to - the database.

        • -
        -
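        - - Example (illustrative; the device name myDbLog is an assumption):
        - attr myDbLog cacheEvents 2 -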
      -
    -
    - -
      cacheLimit -
        - - attr <device> cacheLimit <n> -
        - - In asynchronous logging mode the content of the cache is written into the database and cleared once the number of datasets - in the cache reaches <n> (default: 500). The timer of the asynchronous logging mode is then reset to the value of - attribute "syncInterval". In case of error the next write attempt starts at the earliest after syncInterval/2.
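        - - Example (illustrative; device name and value are assumptions):
        - attr myDbLog cacheLimit 400 -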
        -
      -
    -
    - -
      colEvent -
        - - attr <device> colEvent <n> -
        - - The field length of database field EVENT is adjusted. With this attribute the default value in the DbLog device can be - adjusted if the field length in the database was changed manually. If colEvent=0 is set, the database field - EVENT won't be filled.
        - Note:
        - If the attribute is set, all of the field length limits are valid also for SQLite databases as noticed in Internal COLUMNS !
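        - - Example (illustrative; assumes the EVENT column in the database was manually widened to 1024 characters):
        - attr myDbLog colEvent 1024 -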
        -
      -
    -
    - -
      colReading -
        - - attr <device> colReading <n> -
        - - The field length of database field READING is adjusted. With this attribute the default value in the DbLog device can be - adjusted if the field length in the database was changed manually. If colReading=0 is set, the database field - READING won't be filled.
        - Note:
        - If the attribute is set, all of the field length limits are valid also for SQLite databases as noticed in Internal COLUMNS !
        -
      -
    -
    - -
      colValue -
        - - attr <device> colValue <n> -
        - - The field length of database field VALUE is adjusted. With this attribute the default value in the DbLog device can be - adjusted if the field length in the database was changed manually. If colValue=0 is set, the database field - VALUE won't be filled.
        - Note:
        - If the attribute is set, all of the field length limits are valid also for SQLite databases as noticed in Internal COLUMNS !
        -
      -
    -
    - -
      DbLogType -
        - - attr <device> DbLogType [Current|History|Current/History|SampleFill/History] -
        - - This attribute determines which table or tables in the database are used. If the attribute isn't set, - the setting history is used as default.
        - - - The meaning of the settings in detail:

        - -
          - - - - - - -
          Current Events are only logged into the current-table. - The entries of the current-table are evaluated during SVG creation.
          History Events are only logged into the history-table. No dropdown list with proposals is created during - SVG creation.
          Current/History Events are logged into both the current- and the history-table. - The entries of the current-table are evaluated during SVG creation.
          SampleFill/History Events are only logged into the history-table. The entries of the current-table are evaluated during SVG creation - and can be filled up with a customizable extract of the history-table by using the - DbRep-device command - "set <DbRep-name> tableCurrentFillup" (advanced feature).
          -
        -
        -
        - - Note:
        - The current-table has to be used to get a Device:Reading dropdown list when an SVG plot is created.
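        - - Example (illustrative; the device name myDbLog is an assumption):
        - attr myDbLog DbLogType Current/History -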
        -
      -
    -
    - -
      DbLogSelectionMode -
        - - attr <device> DbLogSelectionMode [Exclude|Include|Exclude/Include] -
        - - This DbLog-device attribute specifies how the device-specific attributes DbLogExclude and DbLogInclude are handled. - If this attribute is missing, it defaults to "Exclude". -
          -
        • Exclude: DbLog behaves just as usual. This means everything specified in the regex in DEF will be logged by default and anything excluded - via the DbLogExclude attribute will not be logged
        • -
        • Include: Nothing will be logged, except the readings specified via regex in the DbLogInclude attribute - (in source devices). - Neither the Regex set in DEF will be considered nor the device name of the source device itself.
        • -
        • Exclude/Include: Almost the same as Exclude, but if the reading matches the DbLogExclude attribute, it - is further checked against the regex in DbLogInclude, which may re-include the already - excluded reading.
        • -
        -
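        - - Example (illustrative; the device name myDbLog is an assumption):
        - attr myDbLog DbLogSelectionMode Exclude/Include -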
      -
    -
    - -
      DbLogInclude -
        - - attr <device> DbLogInclude regex:MinInterval,[regex:MinInterval] ... -
        - - A new attribute DbLogInclude is propagated to all devices if DbLog is used. - DbLogInclude works just like DbLogExclude, but includes matching readings instead. - See also the DbLogSelectionMode attribute of the DbLog device, which influences - how DbLogExclude and DbLogInclude are handled.
        - - Example
        - attr MyDevice1 DbLogInclude .*
        - attr MyDevice2 DbLogInclude state,(floorplantext|MyUserReading):300,battery:3600 -
      -
    -
    - -
      DbLogExclude -
        - - attr <device> DbLogExclude regex:MinInterval,[regex:MinInterval] ... -
        - - A new attribute DbLogExclude is propagated to all devices if DbLog is used. - DbLogExclude works as a regexp to exclude defined readings from logging. The individual regexps are separated by commas. - If a MinInterval is set, the log entry is dropped if the defined interval has not yet elapsed and the value is equal to the last value. -

        - - Example
        - attr MyDevice1 DbLogExclude .*
        - attr MyDevice2 DbLogExclude state,(floorplantext|MyUserReading):300,battery:3600 -
      -
    -
    - -
      excludeDevs -
        - - attr <device> excludeDevs <devspec1>[#Reading],<devspec2>[#Reading],<devspec...> -
        - - The device/reading combinations "devspec1#Reading", "devspec2#Reading" and so on are globally excluded from - logging into the database.
        - The specification of a reading is optional.
        - Devices are thereby explicitly and definitively excluded from logging, without consideration of other excludes or - includes (e.g. in DEF). - The devices to exclude can be specified as a device specification. -

        - - Examples
        - - attr <device> excludeDevs global,Log.*,Cam.*,TYPE=DbLog -
        - # The device global, devices starting with "Log" or "Cam", and devices of TYPE=DbLog are excluded from database logging.
        - - attr <device> excludeDevs .*#.*Wirkleistung.* -
        - # All device/reading combinations which contain "Wirkleistung" in the reading are excluded from logging.
        - - attr <device> excludeDevs SMA_Energymeter#Bezug_WirkP_Zaehler_Diff -
        - # Events of the device "SMA_Energymeter" with reading "Bezug_WirkP_Zaehler_Diff" are excluded from logging.
        -
      -
    -
    - -
      expimpdir -
        - - attr <device> expimpdir <directory> -
        - - If the cache content is exported by the "exportCache" command or imported by the "importCachefile" - command, the file is written into or read from this directory. The default directory is - "(global->modpath)/log/". - Make sure the specified directory exists and is writable.

        - - Example
        - - attr <device> expimpdir /opt/fhem/cache/ -
        -
      -
    -
    - -
      exportCacheAppend -
        - - attr <device> exportCacheAppend [1|0] -
        - - If set, the export of the cache ("set <device> exportCache") appends the content to the newest available - export file. If no export file exists yet, a new one is created.
        - If the attribute is not set, every export process creates a new export file (default).
        -
      -
    -
    - -
      noNotifyDev -
        - - attr <device> noNotifyDev [1|0] -
        - - Enforces that NOTIFYDEV won't be set and hence won't be used.
        -
      -
    -
    - -
      noSupportPK -
        - - attr <device> noSupportPK [1|0] -
        - - Deactivates the module's support of a set primary key.
        -
      -
    -
    - -
      syncEvents -
        - attr <device> syncEvents [1|0] -
        - - Events of the reading syncEvents will be created.
        -
      -
    -
    - -
      shutdownWait -
        - attr <device> shutdownWait <n> -
        - Causes the FHEM shutdown to wait n seconds for pending database commits.
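        - - Example (illustrative; device name and value are assumptions):
        - attr myDbLog shutdownWait 5 -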
        -
      -
    -
    - -
      showproctime -
        - attr <device> showproctime [1|0] -
        - - If set, the reading "sql_processing_time" shows the required execution time (in seconds) for the SQL requests. This is not calculated - for a single SQL statement but is the sum of all SQL statements within an executed DbLog function in the background. - The reading "background_processing_time" shows the total time used in the background.
        -
      -
    -
    - -
      showNotifyTime -
        - attr <device> showNotifyTime [1|0] -
        - - If set, the reading "notify_processing_time" shows the required execution time (in seconds) of the DbLog - notify function. This attribute is useful for performance analyses and helps to determine the difference in time - required after the operation mode was switched from synchronous to asynchronous.
        - -
      -
    -
    - -
      syncInterval -
        - attr <device> syncInterval <n> -
        - - If DbLog is set to asynchronous operation mode (attribute asyncMode=1), this attribute sets the interval in seconds - used to store the in-memory cached events into the database. The default value is 30 seconds.
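        - - Example (illustrative; device name and interval are assumptions):
        - attr myDbLog syncInterval 120 -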
        - -
      -
    -
    - -
      suppressAddLogV3 -
        - attr <device> suppressAddLogV3 [1|0] -
        - - If set, verbose 3 logfile entries created by the addLog function are suppressed.
        -
      -
    -
    - -
      suppressUndef -
        - - attr <device> suppressUndef -
        - Suppresses all undef values when returning data from the DB via a get request.
        - - Example
        - #DbLog eMeter:power:::$val=($val>1500)?undef:$val -
      -
    -
    - -
      timeout -
        - - attr <device> timeout -
        - Sets the timeout of the write cycle into the database in asynchronous mode (default 86400s).
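        - - Example (illustrative; device name and value are assumptions):
        - attr myDbLog timeout 3600 -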
        -
      -
    -
    - -
      useCharfilter -
        - - attr <device> useCharfilter [0|1] -
        - If set, only ASCII characters from 32 to 126 are accepted in the event. - Those are the characters " A-Za-z0-9!"#$%&'()*+,-.\/:;<=>?@[\\]^_`{|}~" .
        - German umlauts and "€" are transcribed (e.g. ä to ae) (default: 0).
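        - - Example (illustrative; the device name myDbLog is an assumption):
        - attr myDbLog useCharfilter 1 -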
        -
      -
    -
    - -
      valueFn -
        - - attr <device> valueFn {} -
        - - Perl expression that can use and change the values of $TIMESTAMP, $DEVICE, $DEVICETYPE, $READING, $VALUE (value of the reading) and - $UNIT (unit of the reading value). - It also has read-only access to $EVENT for evaluation in your expression.
        - If $TIMESTAMP is to be changed, it must meet the condition "yyyy-mm-dd hh:mm:ss", otherwise $TIMESTAMP won't - be changed. - In addition you can set the variable $IGNORE=1 if you want to skip a dataset from logging.

        - - Examples
        - - attr <device> valueFn {if ($DEVICE eq "living_Clima" && $VALUE eq "off" ){$VALUE=0;} elsif ($DEVICE eq "e-power"){$VALUE= sprintf "%.1f", $VALUE;}} -
        - # changes value "off" to "0" for device "living_Clima" and rounds the value of e-power to one decimal place

        - - attr <device> valueFn {if ($DEVICE eq "SMA_Energymeter" && $READING eq "state"){$IGNORE=1;}} -
        - # don't log the dataset of device "SMA_Energymeter" if the reading is "state"

        - - attr <device> valueFn {if ($DEVICE eq "Dum.Energy" && $READING eq "TotalConsumption"){$UNIT="W";}} -
        - # set the unit of device "Dum.Energy" to "W" if reading is "TotalConsumption"

        -
      -
    -
    - -
      verbose4Devs -
        - - attr <device> verbose4Devs <device1>,<device2>,<device..> -
        - - If verbose level 4 is used, only output of the devices listed in this attribute is reported in the FHEM central logfile. If this attribute - isn't set, output of all relevant devices is reported when using verbose level 4. - The given devices are evaluated as a regex.
        - - Example
        - - attr <device> verbose4Devs sys.*,.*5000.*,Cam.*,global -
        - # Devices starting with "sys" or "Cam", devices containing "5000" in their name, and the device "global" are reported in the FHEM - central logfile if verbose=4 is set.
        -
      -
    -
    - -
=end html
=begin html_DE

DbLog

-
    -
    - Mit DbLog werden Events in einer Datenbank gespeichert. Es wird SQLite, MySQL/MariaDB und PostgreSQL unterstützt.

    - - Voraussetzungen

    - - Die Perl-Module DBI und DBD::<dbtype> müssen installiert werden (use cpan -i <module> - falls die eigene Distribution diese nicht schon mitbringt). -

    - - Auf einem Debian-System können diese Module z.Bsp. installiert werden mit:

    - -
      - - - - - - -
      DBI : sudo apt-get install libdbi-perl
      MySQL : sudo apt-get install [mysql-server] mysql-client libdbd-mysql libdbd-mysql-perl (mysql-server nur bei lokaler MySQL-Server-Installation)
      SQLite : sudo apt-get install sqlite3 libdbi-perl libdbd-sqlite3-perl
      PostgreSQL : sudo apt-get install libdbd-pg-perl
      -
    -
    -
    - - Vorbereitungen

    - - Zunächst muss die Datenbank angelegt werden.
    - Beispielcode bzw. Scripts zum Erstellen einer MySQL/PostgreSQL/SQLite Datenbank ist im - SVN -> contrib/dblog/db_create_<DBType>.sql - enthalten.
    - (Achtung: Die lokale FHEM-Installation enthält im Unterverzeichnis ./contrib/dblog nicht die aktuellsten - Scripte !!)

    - - Die Datenbank beinhaltet 2 Tabellen: current und history.
    - Die Tabelle current enthält den letzten Stand pro Device und Reading.
    - In der Tabelle history sind alle Events historisch gespeichert.
    - Beachten sie bitte unbedingt das Attribut DbLogType um die Benutzung der Tabellen - current und history festzulegen. -

    - - Die Tabellenspalten haben folgende Bedeutung:

    - -
      - - - - - - - - - -
      TIMESTAMP : Zeitpunkt des Events, z.B. 2007-12-30 21:45:22
      DEVICE : Name des Devices, z.B. Wetterstation
      TYPE : Type des Devices, z.B. KS300
      EVENT : das auftretende Event als volle Zeichenkette, z.B. humidity: 71 (%)
      READING : Name des Readings, ermittelt aus dem Event, z.B. humidity
      VALUE : aktueller Wert des Readings, ermittelt aus dem Event, z.B. 71
      UNIT : Einheit, ermittelt aus dem Event, z.B. %
      -
    -
    -
    - - Index anlegen
    - Für die Leseperformance, z.B. bei der Erstellung von SVG-PLots, ist es von besonderer Bedeutung dass der Index "Search_Idx" - oder ein vergleichbarer Index (z.B. ein Primary Key) angelegt ist.

    - - Der Index "Search_Idx" kann mit diesen Statements, z.B. in der Datenbank 'fhem', angelegt werden (auch nachträglich):

    - -
      - - - - - -
      MySQL : CREATE INDEX Search_Idx ON `fhem`.`history` (DEVICE, READING, TIMESTAMP);
      SQLite : CREATE INDEX Search_Idx ON `history` (DEVICE, READING, TIMESTAMP);
      PostgreSQL : CREATE INDEX "Search_Idx" ON history USING btree (device, reading, "timestamp");
      -
    -
    - - Der Code zur Anlage ist ebenfalls in den Scripten - SVN -> contrib/dblog/db_create_<DBType>.sql - enthalten.

    - - Für die Verbindung zur Datenbank wird eine Konfigurationsdatei verwendet. - Die Konfiguration ist in einer separaten Datei abgelegt um das Datenbankpasswort nicht in Klartext in der - FHEM-Haupt-Konfigurationsdatei speichern zu müssen. - Ansonsten wäre es mittels des list Befehls einfach auslesbar. -

    - - Die Konfigurationsdatei wird z.B. nach /opt/fhem kopiert und hat folgenden Aufbau, den man an seine Umgebung - anpassen muß (entsprechende Zeilen entkommentieren und anpassen):

    - -
    -    ####################################################################################
    -    # database configuration file     
    -    # 	
    -    # NOTE:
    -    # If you don't use a value for user / password please delete the leading hash mark
    -    # and write 'user => ""' respectively 'password => ""' instead !	
    -    #
    -    #
    -    ## for MySQL                                                      
    -    ####################################################################################
    -    #%dbconfig= (                                                    
    -    #    connection => "mysql:database=fhem;host=<database host>;port=3306",    
    -    #    user => "fhemuser",                                          
    -    #    password => "fhempassword",
    -    #    # optional enable(1) / disable(0) UTF-8 support (at least V 4.042 is necessary) 	
    -    #    utf8 => 1   
    -    #);                                                              
    -    ####################################################################################
    -    #                                                                
    -    ## for PostgreSQL                                                
    -    ####################################################################################
    -    #%dbconfig= (                                                   
    -    #    connection => "Pg:database=fhem;host=<database host>",        
    -    #    user => "fhemuser",                                     
    -    #    password => "fhempassword"                              
    -    #);                                                              
    -    ####################################################################################
    -    #                                                                
    -    ## for SQLite (username and password stay empty for SQLite)      
    -    ####################################################################################
    -    #%dbconfig= (                                                   
    -    #    connection => "SQLite:dbname=/opt/fhem/fhem.db",        
    -    #    user => "",                                             
    -    #    password => ""                                          
    -    #);                                                              
    -    ####################################################################################
    -	
    - Wird configDB genutzt, ist das Konfigurationsfile in die configDB hochzuladen !

    - - Hinweis zu Sonderzeichen:
    - Werden Sonderzeichen, wie z.B. @, $ oder %, welche eine programmtechnische Bedeutung in Perl haben im Passwort verwendet, - sind diese Zeichen zu escapen. - Das heißt in diesem Beispiel wäre zu verwenden: \@,\$ bzw. \%. -
    -
    -
    - - - Define -
      -
      - - define <name> DbLog <configfilename> <regexp> -

      - - <configfilename> ist die vorbereitete Konfigurationsdatei.
      - <regexp> ist identisch zur FileLog-Definition. -

      - - Beispiel: -
        - define myDbLog DbLog /etc/fhem/db.conf .*:.*
        - speichert alles in der Datenbank -
      -
      - - Nachdem das DbLog-Device definiert wurde, ist es empfohlen, einen Konfigurationscheck auszuführen:

      -
        - set <name> configCheck
        -
      -
      - Dieser Check prüft einige wichtige Einstellungen des DbLog-Devices und gibt Empfehlungen für potentielle Verbesserungen. -

      -
      - - DbLog unterscheidet den synchronen (Default) und asynchronen Logmodus. Der Logmodus ist über das - Attribut asyncMode einstellbar. Ab Version 2.13.5 unterstützt DbLog einen gesetzten - Primary Key (PK) in den Tabellen Current und History. Soll PostgreSQL mit PK genutzt werden, muss PostgreSQL mindestens - Version 9.5 sein. -

      - - Der gespeicherte Wert des Readings wird optimiert für eine automatisierte Nachverarbeitung, z.B. yes wird transformiert - nach 1.

      - - Die gespeicherten Werte können mittels GET Funktion angezeigt werden: -
        - get myDbLog - - 2012-11-10 2012-11-10 KS300:temperature -
      -
      - - FileLog-Dateien nach DbLog übertragen

      - Zur Übertragung von vorhandenen Filelog-Daten in die DbLog-Datenbank steht das spezielle Modul 98_FileLogConvert.pm - zur Verfügung.
      - Dieses Modul kann hier - bzw. aus dem Verzeichnis ./contrib geladen werden. - Weitere Informationen und Hilfestellung gibt es im entsprechenden - Forumthread .


      - - Reporting und Management von DbLog-Datenbankinhalten

      - Mit Hilfe SVG können Datenbankinhalte visualisiert werden.
      - Darüber hinaus kann das Modul DbRep genutzt werden um tabellarische - Datenbankauswertungen anzufertigen oder den Datenbankinhalt mit den zur Verfügung stehenden Funktionen zu verwalten. -


      - - Troubleshooting

      - Wenn nach der erfolgreichen Definition das DbLog-Device nicht wie erwartet arbeitet, - können folgende Hinweise hilfreich sein:

      - -
        -
      • Wurden die vorbereitenden Schritte gemacht, die in der commandref beschrieben sind ? (Softwarekomponenten installieren, Tabellen, Index anlegen)
      • -
      • Wurde ein "set <name> configCheck" nach dem Define durchgeführt und eventuelle Fehler beseitigt bzw. Empfehlungen umgesetzt ?
      • -
      • Falls configDB in Benutzung ... wurde das DB-Konfigurationsfile in configDB importiert (z.B. mit "configDB fileimport ./db.conf") ?
      • -
      • Beim Anlegen eines SVG-Plots erscheint keine Drop-Down Liste mit Vorschlagswerten -> Attribut "DbLogType" auf "Current/History" setzen.
      • -
      -
      - - Sollten diese Hinweise nicht zum Erfolg führen, bitte den verbose-Level im DbLog Device auf 4 oder 5 hochsetzen und - die Einträge bezüglich des DbLog-Device im Logfile beachten. - - Zur Problemanalyse bitte die Ausgabe von "list <name>", das Ergebnis von "set <name> configCheck" und die - Ausgaben des DbLog-Device im Logfile im Forumthread posten. -

      - -
    -
    -
    - - - - Set -
      - set <name> addCacheLine YYYY-MM-DD HH:MM:SS|<device>|<type>|<event>|<reading>|<value>|[<unit>]

      -
        Im asynchronen Modus wird ein neuer Datensatz in den Cache eingefügt und beim nächsten Synclauf mit abgearbeitet. -

        - - Beispiel:
        - set <name> addCacheLine 2017-12-05 17:03:59|MaxBathRoom|MAX|valveposition: 95|valveposition|95|%
        -

      - - set <name> addLog <devspec>:<Reading> [Value] [CN=<caller name>] [!useExcludes]

      -
        Fügt einen zusätzlichen Logeintrag einer Device/Reading-Kombination in die Datenbank ein.

        - -
          -
        • <devspec>:<Reading> - Das Device kann als Geräte-Spezifikation angegeben werden.
          - Die Angabe von "Reading" wird als regulärer Ausdruck ausgewertet. Ist - das Reading nicht vorhanden und der Wert "Value" angegeben, wird das Reading - in die DB eingefügt wenn es kein regulärer Ausdruck und ein valider - Readingname ist.
        • -
        • Value - Optional kann "Value" für den Readingwert angegeben werden. Ist Value nicht angegeben, wird der aktuelle - Wert des Readings in die DB eingefügt.
        • -
        • CN=<caller name> - Mit dem Schlüssel "CN=" (Caller Name) kann dem addLog-Aufruf ein String, - z.B. der Name des aufrufenden Devices (z.B. eines at- oder notify-Devices), mitgegeben - werden. Mit Hilfe der im Attribut "valueFn" hinterlegten - Funktion kann dieser Schlüssel über die Variable $CN ausgewertet werden. Dadurch ist es - möglich, das Verhalten des addLogs abhängig von der aufrufenden Quelle zu beeinflussen. -
        • -
        • !useExcludes - Ein eventuell im Quell-Device gesetztes Attribut "DbLogExclude" wird von der Funktion berücksichtigt. Soll dieses - Attribut nicht berücksichtigt werden, kann das Schlüsselwort "!useExcludes" verwendet werden.
        • -
        -
        - - Das Datenbankfeld "EVENT" wird automatisch mit "addLog" belegt.
        - Es wird KEIN zusätzlicher Event im System erzeugt !

        - - Beispiele:
        - set <name> addLog SMA_Energymeter:Bezug_Wirkleistung
        - set <name> addLog TYPE=SSCam:state
        - set <name> addLog MyWetter:(fc10.*|fc8.*)
        - set <name> addLog MyWetter:(wind|wind_ch.*) 20 !useExcludes
        - set <name> addLog TYPE=CUL_HM:FILTER=model=HM-CC-RT-DN:FILTER=subType!=(virtual|):(measured-temp|desired-temp|actuator)

        - - set <name> addLog USV:state CN=di.cronjob
        - In der valueFn-Funktion wird der Aufrufer "di.cronjob" über die Variable $CN ausgewertet und davon abhängig der - Timestamp dieses addLog korrigiert:

        - valueFn = if($CN eq "di.cronjob" and $TIMESTAMP =~ m/\s00:00:[\d:]+/) { $TIMESTAMP =~ s/\s([^\s]+)/ 23:59:59/ } - -

      - - set <name> clearReadings

      -
        Leert Readings die von verschiedenen DbLog-Funktionen angelegt wurden.

      - - set <name> eraseReadings

      -
        Löscht alle Readings außer dem Reading "state".

      - - set <name> commitCache

      -
        Im asynchronen Modus (Attribut asyncMode=1), werden die im Speicher gecachten Daten in die Datenbank geschrieben - und danach der Cache geleert. Der interne Timer des asynchronen Modus wird dabei neu gesetzt. - Der Befehl kann nützlich sein um manuell oder z.B. über ein AT den Cacheinhalt zu einem definierten Zeitpunkt in die - Datenbank zu schreiben.

      - - set <name> configCheck

      -
        Es werden einige wichtige Einstellungen geprüft und Empfehlungen gegeben falls potentielle Verbesserungen - identifiziert wurden. -

      - - set <name> count

      -
        Zählt die Datensätze in den Tabellen current und history und schreibt die Ergebnisse in die Readings - countCurrent und countHistory.

      - - set <name> countNbl

      -
        - Die non-blocking Ausführung von "set <name> count". -

        - - Hinweis:
        - Obwohl die Funktion selbst non-blocking ist, muß das DbLog-Device im asynchronen Modus betrieben werden (asyncMode = 1) - um FHEM nicht zu blockieren ! -

      - - set <name> deleteOldDays <n>

      -
        Löscht Datensätze in Tabelle history, die älter als <n> Tage sind. - Die Anzahl der gelöschten Datensätze wird in das Reading lastRowsDeleted geschrieben.

      - - set <name> deleteOldDaysNbl <n>

      -
        - Identisch zu Funktion "deleteOldDays" wobei deleteOldDaysNbl nicht blockierend ausgeführt wird. -

        - - Hinweis:
        - Obwohl die Funktion selbst non-blocking ist, muß das DbLog-Device im asynchronen Modus betrieben werden (asyncMode = 1) - um FHEM nicht zu blockieren ! -

      - - - set <name> exportCache [nopurge | purgecache]

      -
        Wenn DbLog im asynchronen Modus betrieben wird, kann der Cache mit diesem Befehl in ein Textfile geschrieben - werden. Das File wird per Default in dem Verzeichnis (global->modpath)/log/ erstellt. Das Zielverzeichnis kann mit - dem Attribut "expimpdir" geändert werden.
        - Der Name des Files wird automatisch generiert und enthält den Präfix "cache_", gefolgt von dem DbLog-Devicenamen und - dem aktuellen Zeitstempel, z.B. "cache_LogDB_2017-03-23_22-13-55".
        - Mit den Optionen "nopurge" bzw. "purgecache" wird festgelegt, ob der Cacheinhalt nach dem Export gelöscht werden - soll oder nicht. Mit "nopurge" (default) bleibt der Cacheinhalt erhalten.
        - Das Attribut "exportCacheAppend" bestimmt dabei, ob mit jedem Exportvorgang ein neues Exportfile - angelegt wird (default) oder der Cacheinhalt an das bestehende (neueste) Exportfile angehängt wird. -

      - - set <name> importCachefile <file>

      -
        Importiert ein mit "exportCache" geschriebenes File in die Datenbank. - Die verfügbaren Dateien werden per Default im Verzeichnis (global->modpath)/log/ gesucht und eine Drop-Down Liste - erzeugt sofern Dateien gefunden werden. Das Quellverzeichnis kann mit dem Attribut expimpdir geändert werden.
        - Es werden nur die Dateien angezeigt, die dem Muster "cache_", gefolgt von dem DbLog-Devicenamen entsprechen.
        - Zum Beispiel "cache_LogDB_2017-03-23_22-13-55", falls das Log-Device "LogDB" heißt.
        - Nach einem erfolgreichen Import wird das File mit dem Präfix "impdone_" versehen und erscheint dann nicht mehr - in der Drop-Down Liste. Soll ein Cachefile in eine andere als der Quelldatenbank importiert werden, kann das - DbLog-Device im Filenamen angepasst werden damit dieses File den Suchkriterien entspricht und in der Drop-Down Liste - erscheint.

      - - set <name> listCache

      -
        Wenn DbLog im asynchronen Modus betrieben wird (Attribut asyncMode=1), können mit diesem Befehl die im Speicher gecachten Events - angezeigt werden.

      - - set <name> purgeCache

      -
        Im asynchronen Modus (Attribut asyncMode=1), werden die im Speicher gecachten Daten gelöscht. - Es werden keine Daten aus dem Cache in die Datenbank geschrieben.

      - - set <name> reduceLog <no>[:<nn>] [average[=day]] [exclude=device1:reading1,device2:reading2,...]

      -
        Reduziert historische Datensätze, die älter sind als <no> Tage und (optional) neuer sind als <nn> Tage - auf einen Eintrag (den ersten) pro Stunde je Device & Reading.
        - Innerhalb von device/reading können SQL-Wildcards "%" und "_" verwendet werden.

        - - Das Reading "reduceLogState" zeigt den Ausführungsstatus des letzten reduceLog-Befehls.

        - Durch die optionale Angabe von 'average' wird nicht nur die Datenbank bereinigt, sondern alle numerischen Werte - einer Stunde werden auf einen einzigen Mittelwert reduziert.
        - Durch die optionale Angabe von 'average=day' wird nicht nur die Datenbank bereinigt, sondern alle numerischen - Werte eines Tages auf einen einzigen Mittelwert reduziert. (impliziert 'average')

        - - Optional kann als letzer Parameter "exclude=device1:reading1,device2:reading2,...." - angegeben werden um device/reading Kombinationen von reduceLog auszuschließen.

        - - Optional kann als letzer Parameter "include=device:reading" angegeben werden um - die auf die Datenbank ausgeführte SELECT-Abfrage einzugrenzen, was die RAM-Belastung verringert und die - Performance erhöht.

        - -
          - Beispiel:
          - set <name> reduceLog 270 average include=Luftdaten_remote:%
          - -
        -
        - - ACHTUNG: Es wird dringend empfohlen zu überprüfen ob der standard INDEX 'Search_Idx' in der Tabelle 'history' existiert!
        - Die Abarbeitung dieses Befehls dauert unter Umständen (ohne INDEX) extrem lange. FHEM wird durch den Befehl bis - zur Fertigstellung komplett blockiert !

        - -

      - - set <name> reduceLogNbl <no>[:<nn>] [average[=day]] [exclude=device1:reading1,device2:reading2,...]

      -
        - Führt die gleiche Funktion wie "set <name> reduceLog" aus. Im Gegensatz zu reduceLog wird FHEM durch den Befehl reduceLogNbl nicht - blockiert, da diese Funktion non-blocking implementiert ist! -

        - - Hinweis:
        - Obwohl die Funktion selbst non-blocking ist, muß das DbLog-Device im asynchronen Modus betrieben werden (asyncMode = 1) - um FHEM nicht zu blockieren ! -

      - - set <name> reopen [n]

      -
        Schließt die Datenbank und öffnet sie danach sofort wieder wenn keine Zeit [n] in Sekunden angegeben wurde. - Dabei wird die Journaldatei geleert und neu angelegt.
        - Verbessert den Datendurchsatz und vermeidet Speicherplatzprobleme.
        - Wurde eine optionale Verzögerungszeit [n] in Sekunden angegeben, wird die Verbindung zur Datenbank geschlossen und erst - nach Ablauf von [n] Sekunden wieder neu verbunden. - Im synchronen Modus werden die Events in dieser Zeit nicht gespeichert. - Im asynchronen Modus werden die Events im Cache gespeichert und nach dem Reconnect in die Datenbank geschrieben.

      - - set <name> rereadcfg

      -
        Schließt die Datenbank und öffnet sie danach sofort wieder. Dabei wird die Journaldatei geleert und neu angelegt.
        - Verbessert den Datendurchsatz und vermeidet Speicherplatzprobleme.
        - Zwischen dem Schließen der Verbindung und dem Neuverbinden werden die Konfigurationsdaten neu gelesen.

      - - set <name> userCommand <validSqlStatement>

      -
        BENUTZE DIESE FUNKTION NUR, WENN DU WIRKLICH (WIRKLICH!) WEISST, WAS DU TUST!!!

        - Führt einen beliebigen (!!!) sql Befehl in der Datenbank aus. Der Befehl und ein zurückgeliefertes - Ergebnis wird in das Reading "userCommand" bzw. "userCommandResult" geschrieben. Das Ergebnis kann nur - einzeilig sein. - Für SQL-Statements, die mehrzeilige Ergebnisse liefern, kann das Auswertungsmodul - DbRep genutzt werden.
        - Wird von der Datenbankschnittstelle kein Ergebnis (undef) zurückgeliefert, erscheint die Meldung "no result" - im Reading "userCommandResult". -

      - -

    - - - - Get -
      - get <name> ReadingsVal       <device> <reading> <default>
      - get <name> ReadingsTimestamp <device> <reading> <default>
      -
      - Liest einen einzelnen Wert aus der Datenbank, Benutzung und Syntax sind weitgehend identisch zu ReadingsVal() und ReadingsTimestamp().
      -
    -
    -
    -
      - get <name> <infile> <outfile> <from> - <to> <column_spec> -

      - Liest Daten aus der Datenbank. Wird durch die Frontends benutzt um Plots - zu generieren ohne selbst auf die Datenbank zugreifen zu müssen. -
      -
        -
      • <in>
        - Ein Parameter um eine Kompatibilität zum Filelog herzustellen. - Dieser Parameter ist per default immer auf - zu setzen.
        - Folgende Ausprägungen sind zugelassen:
        -
          -
        • current: die aktuellen Werte aus der Tabelle "current" werden gelesen.
        • -
        • history: die historischen Werte aus der Tabelle "history" werden gelesen.
        • -
        • -: identisch wie "history"
        • -
        -
      • - -
      • <out>
        - Ein Parameter um eine Kompatibilität zum Filelog herzustellen. - Dieser Parameter ist per default immer auf - zu setzen um die - Ermittlung der Daten aus der Datenbank für die Plotgenerierung zu prüfen.
        - Folgende Ausprägungen sind zugelassen:
        -
          -
        • ALL: Es werden alle Spalten der Datenbank ausgegeben. Inclusive einer Überschrift.
        • -
        • Array: Es werden alle Spalten der Datenbank als Hash ausgegeben. Alle Datensätze als Array zusammengefasst.
        • -
        • INT: intern zur Plotgenerierung verwendet
        • -
        • -: default
        • -
        -
      • - -
      • <from> / <to>
        - Wird benutzt um den Zeitraum der Daten einzugrenzen. Es ist das folgende - Zeitformat oder ein Teilstring davon zu benutzen:
        -
          YYYY-MM-DD_HH24:MI:SS
      • - -
      • <column_spec>
        - Für jede column_spec Gruppe wird ein Datenset zurückgegeben welches - durch einen Kommentar getrennt wird. Dieser Kommentar repräsentiert - die column_spec.
        - Syntax: <device>:<reading>:<default>:<fn>:<regexp>
        -
          -
        • <device>
          - Der Name des Devices. Achtung: Gross/Kleinschreibung beachten!
          - Es kann ein % als Jokerzeichen angegeben werden.
        • -
        • <reading>
          - Das Reading des angegebenen Devices zur Datenselektion.
          - Es kann ein % als Jokerzeichen angegeben werden.
          - Achtung: Gross/Kleinschreibung beachten! -
        • -
        • <default>
          - Zur Zeit noch nicht implementiert. -
        • -
        • <fn> - Angabe einer speziellen Funktion: -
            -
          • int
            - Ermittelt den Zahlenwert ab dem Anfang der Zeichenkette aus der - Spalte "VALUE". Benutzt z.B. für Ausprägungen wie 10%. -
          • -
          • int<digit>
            - Ermittelt den Zahlenwert ab dem Anfang der Zeichenkette aus der - Spalte "VALUE", inclusive negativem Vorzeichen und Dezimaltrenner. - Benutzt z.B. für Ausprägungen wie -5.7°C. -
          • -
          • delta-h / delta-d
            - Ermittelt die relative Veränderung eines Zahlenwertes pro Stunde - oder pro Tag. Wird benutzt z.B. für Spalten die einen - hochlaufenden Zähler enthalten wie im Falle für ein KS300 Regenzähler - oder dem 1-wire Modul OWCOUNT. -
          • -
          • delta-ts
            - Ermittelt die vergangene Zeit zwischen dem letzten und dem aktuellen Logeintrag - in Sekunden und ersetzt damit den originalen Wert. -
          • -
        • -
        • <regexp>
          - Diese Zeichenkette wird als Perl Befehl ausgewertet. - Die regexp wird vor dem angegebenen <fn> Parameter ausgeführt. -
          - Bitte zur Beachtung: Diese Zeichenkette darf keine Leerzeichen - enthalten da diese sonst als <column_spec> Trennung - interpretiert werden und alles nach dem Leerzeichen als neue - <column_spec> gesehen wird.
          - - Schlüsselwörter -
        • $val ist der aktuelle Wert, den die Datenbank für ein Device/Reading ausgibt.
        • -
        • $ts ist der aktuelle Timestamp des Logeintrages.
        • -
        • Wird als $val das Schlüsselwort "hide" zurückgegeben, so wird dieser Logeintrag nicht - ausgegeben, trotzdem aber für die Zeitraumberechnung verwendet.
        • -
        • Wird als $val das Schlüsselwort "ignore" zurückgegeben, so wird dieser Logeintrag - nicht für eine Folgeberechnung verwendet.
        • - -
      • - -
      -

      - Beispiele: -
        -
      • get myDbLog - - 2012-11-10 2012-11-20 KS300:temperature
      • - -
      • get myDbLog current ALL - - %:temperature

      • - Damit erhält man alle aktuellen Readings "temperature" von allen in der DB geloggten Devices. - Achtung: bei Nutzung von Jokerzeichen auf die history-Tabelle kann man sein FHEM aufgrund langer Laufzeit lahmlegen! - -
      • get myDbLog - - 2012-11-10_10 2012-11-10_20 KS300:temperature::int1
        - gibt Daten aus von 10Uhr bis 20Uhr am 10.11.2012
      • - -
      • get myDbLog - all 2012-11-10 2012-11-20 KS300:temperature
      • - -
      • get myDbLog - - 2012-11-10 2012-11-20 KS300:temperature KS300:rain::delta-h KS300:rain::delta-d
      • - -
      • get myDbLog - - 2012-11-10 2012-11-20 MyFS20:data:::$val=~s/(on|off).*/$1eq"on"?1:0/eg
        - gibt 1 zurück für alle Ausprägungen von on* (on|on-for-timer etc) und 0 für alle off*
      • - -
      • get myDbLog - - 2012-11-10 2012-11-20 Bodenfeuchte:data:::$val=~s/.*B:\s([-\.\d]+).*/$1/eg
        - Beispiel von OWAD: Ein Wert wie z.B.: "A: 49.527 % B: 66.647 % C: 9.797 % D: 0.097 V"
        - und die Ausgabe ist für das Reading B folgende: 2012-11-20_10:23:54 66.647
      • - -
      • get DbLog - - 2013-05-26 2013-05-28 Pumpe:data::delta-ts:$val=~s/on/hide/
        - Realisierung eines Betriebsstundenzählers. Durch delta-ts wird die Zeit in Sek zwischen den Log- - Einträgen ermittelt. Die Zeiten werden bei den on-Meldungen nicht ausgegeben welche einer Abschaltzeit - entsprechen würden.
      • -
      -

      -
    - - Get für die Nutzung von webcharts -
      - get <name> <infile> <outfile> <from> - <to> <device> <querytype> <xaxis> <yaxis> <savename> -

      - Liest Daten aus der Datenbank aus und gibt diese in JSON formatiert aus. Wird für das Charting Frontend genutzt -
      - -
        -
      • <name>
        - Der Name des definierten DbLogs, so wie er in der fhem.cfg angegeben wurde.
      • - -
      • <in>
        - Ein Dummy Parameter um eine Kompatibilität zum Filelog herzustellen. - Dieser Parameter ist immer auf - zu setzen.
      • - -
      • <out>
        - Ein Dummy Parameter um eine Kompatibilität zum Filelog herzustellen. - Dieser Parameter ist auf webchart zu setzen um die Charting Get Funktion zu nutzen. -
      • - -
      • <from> / <to>
        - Wird benutzt um den Zeitraum der Daten einzugrenzen. Es ist das folgende - Zeitformat zu benutzen:
        -
          YYYY-MM-DD_HH24:MI:SS
      • - -
      • <device>
        - Ein String, der das abzufragende Device darstellt.
      • - -
      • <querytype>
        - Ein String, der die zu verwendende Abfragemethode darstellt. Zur Zeit unterstützte Werte sind:
        - getreadings um für ein bestimmtes device alle Readings zu erhalten
        - getdevices um alle verfügbaren devices zu erhalten
        - timerange um Chart-Daten abzufragen. Es werden die Parameter 'xaxis', 'yaxis', 'device', 'to' und 'from' benötigt
        - savechart um einen Chart unter Angabe eines 'savename' und seiner zugehörigen Konfiguration abzuspeichern
        - deletechart um einen zuvor gespeicherten Chart unter Angabe einer id zu löschen
        - getcharts um eine Liste aller gespeicherten Charts zu bekommen.
        - getTableData um Daten aus der Datenbank abzufragen und in einer Tabelle darzustellen. Benötigt paging Parameter wie start und limit.
        - hourstats um Statistiken für einen Wert (yaxis) für eine Stunde abzufragen.
        - daystats um Statistiken für einen Wert (yaxis) für einen Tag abzufragen.
        - weekstats um Statistiken für einen Wert (yaxis) für eine Woche abzufragen.
        - monthstats um Statistiken für einen Wert (yaxis) für einen Monat abzufragen.
        - yearstats um Statistiken für einen Wert (yaxis) für ein Jahr abzufragen.
        -
      • - -
      • <xaxis>
        - Ein String, der die X-Achse repräsentiert
      • - -
      • <yaxis>
        - Ein String, der die Y-Achse repräsentiert
      • - -
      • <savename>
        - Ein String, unter dem ein Chart in der Datenbank gespeichert werden soll
      • - -
      • <chartconfig>
        - Ein jsonstring der den zu speichernden Chart repräsentiert
      • - -
      • <pagingstart>
        - Ein Integer um den Startwert für die Abfrage 'getTableData' festzulegen
      • - -
      • <paginglimit>
        - Ein Integer um den Limitwert für die Abfrage 'getTableData' festzulegen
      • -
      -

      - Beispiele: -
        -
      • get logdb - webchart "" "" "" getcharts
        - Liefert alle gespeicherten Charts aus der Datenbank
      • - -
      • get logdb - webchart "" "" "" getdevices
        - Liefert alle verfügbaren Devices aus der Datenbank
      • - -
      • get logdb - webchart "" "" ESA2000_LED_011e getreadings
        - Liefert alle verfügbaren Readings aus der Datenbank unter Angabe eines Gerätes
      • - -
      • get logdb - webchart 2013-02-11_00:00:00 2013-02-12_00:00:00 ESA2000_LED_011e timerange TIMESTAMP day_kwh
        - Liefert Chart-Daten, die auf folgenden Parametern basieren: 'xaxis', 'yaxis', 'device', 'to' und 'from'
        - Die Ausgabe erfolgt als JSON, z.B.: [{'TIMESTAMP':'2013-02-11 00:10:10','VALUE':'0.22431388090756'},{'TIMESTAMP'.....}]
      • - -
      • get logdb - webchart 2013-02-11_00:00:00 2013-02-12_00:00:00 ESA2000_LED_011e savechart TIMESTAMP day_kwh tageskwh
        - Speichert einen Chart unter Angabe eines 'savename' und seiner zugehörigen Konfiguration
      • - -
      • get logdb - webchart "" "" "" deletechart "" "" 7
        - Löscht einen zuvor gespeicherten Chart unter Angabe einer id
      • -
      -

      -
    - - - Attribute -

    - -
      addStateEvent -
        - attr <device> addStateEvent [0|1] -
        - Bekanntlich wird normalerweise bei einem Event mit dem Reading "state" der state-String entfernt, d.h. - der Event ist nicht zum Beispiel "state: on" sondern nur "on".
        - Meistens ist es aber hilfreich in DbLog den kompletten Event verarbeiten zu können. Deswegen übernimmt DbLog per Default - den Event inklusive dem Reading-String "state".
        - In einigen Fällen, z.B. alten oder speziellen Modulen, ist es allerdings wünschenswert den state-String wie gewöhnlich - zu entfernen. In diesen Fällen bitte addStateEvent = "0" setzen. - Versuchen sie bitte diese Einstellung, falls es mit dem Standard Probleme geben sollte. -
        -
      -
    -
    - -
      asyncMode -
        - attr <device> asyncMode [1|0] -
        - - Dieses Attribut stellt den Arbeitsmodus von DbLog ein. Im asynchronen Modus (asyncMode=1) werden die zu speichernden Events zunächst im Speicher - gecacht. Nach Ablauf der Synchronisationszeit (Attribut syncInterval) oder bei Erreichen der maximalen Anzahl der Datensätze im Cache - (Attribut cacheLimit) werden die gecachten Events im Block in die Datenbank geschrieben. - Ist die Datenbank nicht verfügbar, werden die Events weiterhin im Speicher gehalten und nach Ablauf des Syncintervalls in die Datenbank - geschrieben falls sie dann verfügbar ist.
        - Im asynchronen Mode werden die Daten nicht blockierend mit einem separaten Hintergrundprozess in die Datenbank geschrieben. - Der Timeout-Wert für diesen Hintergrundprozess kann mit dem Attribut "timeout" (Default 86400s) eingestellt werden. - Im synchronen Modus (Normalmodus) werden die Events nicht gecacht und sofort in die Datenbank geschrieben. Ist die Datenbank nicht - verfügbar gehen sie verloren.
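        - - Beispiel (nur zur Illustration; der Devicename myDbLog ist eine Annahme):
        - attr myDbLog asyncMode 1 -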
        -
      -
    -
    - -
      commitMode -
        - attr <device> commitMode [basic_ta:on | basic_ta:off | ac:on_ta:on | ac:on_ta:off | ac:off_ta:on] -
        - - Ändert die Verwendung der Datenbank Autocommit- und/oder Transaktionsfunktionen. - Wird Transaktion "aus" verwendet, werden im asynchronen Modus nicht gespeicherte Datensätze nicht an den Cache zurück - gegeben. - Dieses Attribut ist ein advanced feature und sollte nur im konkreten Bedarfs- bzw. Supportfall geändert werden.

        - -
          -
        • basic_ta:on - Autocommit Servereinstellung / Transaktion ein (default)
        • -
        • basic_ta:off - Autocommit Servereinstellung / Transaktion aus
        • -
        • ac:on_ta:on - Autocommit ein / Transaktion ein
        • -
        • ac:on_ta:off - Autocommit ein / Transaktion aus
        • -
        • ac:off_ta:on - Autocommit aus / Transaktion ein (Autocommit "aus" impliziert Transaktion "ein")
        • -
        - -
      -
    -
    - -
      cacheEvents -
        - attr <device> cacheEvents [2|1|0] -
        -
          -
        • cacheEvents=1: es werden Events für das Reading CacheUsage erzeugt wenn ein Event zum Cache hinzugefügt wurde.
        • -
        • cacheEvents=2: es werden Events für das Reading CacheUsage erzeugt wenn im asynchronen Mode der Schreibzyklus in die - Datenbank beginnt. CacheUsage enthält zu diesem Zeitpunkt die Anzahl der in die Datenbank zu schreibenden - Datensätze.

        • -
        -
      -
    -
    - -
      cacheLimit -
        - - attr <device> cacheLimit <n> -
        - - Im asynchronen Logmodus wird der Cache in die Datenbank weggeschrieben und geleert wenn die Anzahl <n> Datensätze - im Cache erreicht ist (Default: 500). Der Timer des asynchronen Logmodus wird dabei neu auf den Wert des Attributs "syncInterval" - gesetzt. Im Fehlerfall wird ein erneuter Schreibversuch frühestens nach syncInterval/2 gestartet.
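        - - Beispiel (nur zur Illustration; Devicename und Wert sind Annahmen):
        - attr myDbLog cacheLimit 400 -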
        -
      -
    -
    - -
      colEvent -
        - - attr <device> colEvent <n> -
        - - Die Feldlänge für das DB-Feld EVENT wird userspezifisch angepasst. Mit dem Attribut kann der Default-Wert im Modul - verändert werden wenn die Feldlänge in der Datenbank manuell geändert wurde. Mit colEvent=0 wird das Datenbankfeld - EVENT nicht gefüllt.
        - Hinweis:
        - Mit gesetztem Attribut gelten alle Feldlängenbegrenzungen auch für SQLite DB wie im Internal COLUMNS angezeigt !
        -
      -
    -
    - -
      colReading -
        - - attr <device> colReading <n> -
        - - Die Feldlänge für das DB-Feld READING wird userspezifisch angepasst. Mit dem Attribut kann der Default-Wert im Modul - verändert werden wenn die Feldlänge in der Datenbank manuell geändert wurde. Mit colReading=0 wird das Datenbankfeld - READING nicht gefüllt.
        - Hinweis:
        - Mit gesetztem Attribut gelten alle Feldlängenbegrenzungen auch für SQLite DB wie im Internal COLUMNS angezeigt !
        -
      -
    -
    - -
      colValue -
        - - attr <device> colValue <n> -
        - - Die Feldlänge für das DB-Feld VALUE wird userspezifisch angepasst. Mit dem Attribut kann der Default-Wert im Modul - verändert werden wenn die Feldlänge in der Datenbank manuell geändert wurde. Mit colValue=0 wird das Datenbankfeld - VALUE nicht gefüllt.
        - Hinweis:
        - Mit gesetztem Attribut gelten alle Feldlängenbegrenzungen auch für SQLite DB wie im Internal COLUMNS angezeigt !
        -
      -
    -
    - -
      DbLogType -
        - - attr <device> DbLogType [Current|History|Current/History|SampleFill/History] -
        - - Dieses Attribut legt fest, welche Tabelle oder Tabellen in der Datenbank genutzt werden sollen. Ist dieses Attribut nicht gesetzt, wird - per default die Einstellung history verwendet.

        - - Bedeutung der Einstellungen sind:

        - -
          - - - - - - -
          Current Events werden nur in die current-Tabelle geloggt. - Die current-Tabelle wird bei der SVG-Erstellung ausgewertet.
          History Events werden nur in die history-Tabelle geloggt. Es wird keine DropDown-Liste mit Vorschlägen bei der SVG-Erstellung - erzeugt.
          Current/History Events werden sowohl in die current- als auch in die history-Tabelle geloggt. - Die current-Tabelle wird bei der SVG-Erstellung ausgewertet.
          SampleFill/History Events werden nur in die history-Tabelle geloggt. Die current-Tabelle wird bei der SVG-Erstellung ausgewertet und - kann zur Erzeugung einer DropDown-Liste mittels einem - DbRep-Device
          "set <DbRep-Name> tableCurrentFillup" mit - einem einstellbaren Extract der history-Tabelle gefüllt werden (advanced Feature).
          -
        -
        -
        - - Hinweis:
        - Die Current-Tabelle muß genutzt werden um eine Device:Reading-DropDownliste zur Erstellung eines - SVG-Plots zu erhalten.
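        - - Beispiel (nur zur Illustration; der Devicename myDbLog ist eine Annahme):
        - attr myDbLog DbLogType Current/History -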
        -
      -
    -
    - -
      DbLogSelectionMode
        attr <device> DbLogSelectionMode [Exclude|Include|Exclude/Include]
        This attribute, specific to DbLog devices, controls how the device-specific attributes DbLogExclude and
        DbLogInclude (see below) are evaluated. If the attribute is missing, "Exclude" is assumed as default.

        • Exclude: DbLog behaves as before; everything matched by the regexp in the DEF is logged, except what is
          excluded by the regexp in DbLogExclude. The attribute DbLogInclude is not considered in this case.

        • Include: Only what is included by the regexp in DbLogInclude (in the source device) is logged. The
          attribute DbLogExclude is not considered in this case, nor is the regexp in the DEF. The device name (of
          the source device) is not part of the evaluation either.

        • Exclude/Include: Works essentially like "Exclude", except that both DbLogExclude and DbLogInclude are
          checked. Readings that are excluded by DbLogExclude but included again by DbLogInclude are therefore
          still logged.

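        A sketch of the Include mode; "myDbLog" and the source device "MySensor" are assumptions:

          attr myDbLog DbLogSelectionMode Include
          attr MySensor DbLogInclude temperature,humidity:600

        Only the readings temperature and humidity of MySensor are logged; humidity is skipped as long as
        600 seconds have not elapsed and its value has not changed. The regexp in the DEF of myDbLog is ignored in
        this mode.
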
      DbLogInclude
        attr <device> DbLogInclude regex:MinInterval,[regex:MinInterval] ...
        When DbLog is used, the attribute DbLogInclude is propagated to all devices. DbLogInclude works in effect
        exactly like DbLogExclude, except that readings matching these regexps are included in the logging instead
        of excluded. See also the DbLog device-specific attribute DbLogSelectionMode, which controls how
        DbLogExclude and DbLogInclude are evaluated.

        Example
        attr MyDevice1 DbLogInclude .*
        attr MyDevice2 DbLogInclude state,(floorplantext|MyUserReading):300,battery:3600

      DbLogExclude
        attr <device> DbLogExclude regex:MinInterval,[regex:MinInterval] ...
        When DbLog is used, the attribute DbLogExclude is propagated to all devices. The value of the attribute is
        evaluated as a regexp and excludes the readings matched by it from logging. Individual regexps are
        separated by commas. If MinInterval is specified, a log entry is only skipped if the interval has not yet
        elapsed and the value of the reading has not changed.

        Example
        attr MyDevice1 DbLogExclude .*
        attr MyDevice2 DbLogExclude state,(floorplantext|MyUserReading):300,battery:3600

      excludeDevs
        attr <device> excludeDevs <devspec1>[#Reading],<devspec2>[#Reading],<devspec...>
        The device/reading combinations "devspec1#Reading", "devspec2#Reading" up to "devspec..." are globally
        excluded from logging into the database.
        Specifying a reading to exclude is optional.
        This way devices/readings can be excluded from logging explicitly and consistently, without regard to
        other excludes or includes (e.g. in the DEF). The devices to exclude can be given as a device
        specification. For further details regarding devspec see the device specification documentation.

        Example
        attr <device> excludeDevs global,Log.*,Cam.*,TYPE=DbLog
        # The device "global", devices starting with "Log" or "Cam", and devices of type "DbLog" are excluded from logging.
        attr <device> excludeDevs .*#.*Wirkleistung.*
        # All device/reading combinations with "Wirkleistung" in the reading are excluded from logging.
        attr <device> excludeDevs SMA_Energymeter#Bezug_WirkP_Zaehler_Diff
        # The event with device "SMA_Energymeter" and reading "Bezug_WirkP_Zaehler_Diff" is excluded from logging.

      expimpdir
        attr <device> expimpdir <directory>
        The cache file is created in this directory on export and looked up there on import. See the set commands
        "exportCache" and "importCachefile". The default directory is "(global->modpath)/log/". The directory
        specified in the attribute must exist and be writable.

        Example
        attr <device> expimpdir /opt/fhem/cache/

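        A sketch of the export/import round trip through this directory; "myDbLog" is an assumed device name and
        the cache file name is only illustrative, since the real name is generated by exportCache:

          attr myDbLog expimpdir /opt/fhem/cache/
          set myDbLog exportCache purgecache
          set myDbLog importCachefile cache_myDbLog_2018-10-17_12-00-00

        exportCache writes the cache content to a file in /opt/fhem/cache/ (the purgecache option additionally
        empties the cache), and importCachefile reads such a file back into the database.
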
      exportCacheAppend
        attr <device> exportCacheAppend [1|0]
        If set, exporting the cache ("set <device> exportCache") appends the cache content to the newest already
        existing export file. If no export file exists yet, a new one is created.
        If the attribute is not set, a new export file is created on every export. (default)

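        A sketch; "myDbLog" is an assumed device name:

          attr myDbLog exportCacheAppend 1
          set myDbLog exportCache

        Repeated exports now grow a single file instead of creating a new file per export, which keeps the export
        directory tidy if the cache is written to disk regularly, e.g. while the database is unreachable.
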
      noNotifyDev
        attr <device> noNotifyDev [1|0]
        Enforces that NOTIFYDEV is not set and therefore not used.

      noSupportPK
        attr <device> noSupportPK [1|0]
        Deactivates the module's programmatic support of a set primary key.

      shutdownWait
        attr <device> shutdownWait <n>
        During shutdown FHEM waits for n seconds in order to finish the database operations properly.

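        A sketch, assuming the device name "myDbLog" and a wait time of 2 seconds:

          attr myDbLog shutdownWait 2

        A pending database operation gets up to 2 extra seconds to complete before FHEM terminates, at the price
        of a correspondingly slower shutdown.
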
      showproctime
        attr <device> showproctime [1|0]
        If set, the reading "sql_processing_time" shows the processing time (in seconds) required for the SQL
        execution of the performed function. Not a single SQL statement, but the sum of all SQL queries necessary
        within the respective function is considered. The reading "background_processing_time" shows the time
        spent in the BlockingCall child process.

      showNotifyTime
        attr <device> showNotifyTime [1|0]
        If set, the reading "notify_processing_time" shows the processing time (in seconds) needed to execute the
        DbLog notify function. The attribute is suitable for performance analyses and also helps to determine the
        difference in time consumption when switching from synchronous to asynchronous mode (see the sketch below).

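        A sketch of such a mode comparison; "myDbLog" is assumed:

          attr myDbLog showproctime 1
          attr myDbLog showNotifyTime 1
          # observe notify_processing_time in synchronous mode ...
          attr myDbLog asyncMode 0
          # ... then switch and compare
          attr myDbLog asyncMode 1

        In synchronous mode notify_processing_time includes the database write, in asynchronous mode it
        essentially only covers caching the event, so comparing the reading in both modes quantifies the benefit
        of asyncMode.
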
      syncEvents
        attr <device> syncEvents [1|0]
        Events are generated for the reading NextSync.

      syncInterval
        attr <device> syncInterval <n>
        If DbLog is operated in asynchronous mode (attribute asyncMode=1), this attribute sets the interval in
        seconds for storing the events cached in memory into the database. The default value is 30 seconds.

      suppressAddLogV3
        attr <device> suppressAddLogV3 [1|0]
        If set, verbose-3 log entries produced by the addLog function are suppressed.

      suppressUndef
        attr <device> suppressUndef <n>
        Suppresses all undef values that are selected from the database by a get request, e.g. for a plot.

        Example
        #DbLog eMeter:power:::$val=($val>1500)?undef:$val

      timeout
        attr <device> timeout <n>
        Sets the timeout value (in seconds) for the database write cycle in asynchronous mode (default: 86400).

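        A sketch, assuming "myDbLog" and a value of 120 seconds; the retry behaviour described is the expected one
        for the asynchronous mode:

          attr myDbLog timeout 120

        If the background write of the cache has not finished after 120 seconds, the BlockingCall is aborted; the
        cached data are then retried with a following sync cycle.
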
      useCharfilter
        attr <device> useCharfilter [0|1]
        If set, only ASCII characters from 32 to 126 are accepted in the event. (default: 0)
        These are the characters " A-Za-z0-9!"#$%&'()*+,-.\/:;<=>?@[\\]^_`{|}~".
        Umlauts and "€" are converted (e.g. ä to ae, € to EUR).

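        A sketch; "myDbLog" is assumed:

          attr myDbLog useCharfilter 1

        An event containing "ä" or "€" is stored with "ae" or "EUR" instead, while any other character outside
        ASCII 32 to 126 is removed before the dataset reaches the database.
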
      valueFn
        attr <device> valueFn {}
        A Perl expression can access and modify the variables $TIMESTAMP, $DEVICE, $DEVICETYPE, $READING, $VALUE
        (value of the reading) and $UNIT (unit of the reading value); the changed values are logged. In addition
        there is read access to $EVENT for an evaluation in the Perl expression; this variable cannot be changed.
        If $TIMESTAMP is to be changed, the form "yyyy-mm-dd hh:mm:ss" must be kept, otherwise the changed
        $TIMESTAMP is not applied. Additionally, a dataset can be excluded from logging by setting the variable
        "$IGNORE=1".

        Examples
        attr <device> valueFn {if ($DEVICE eq "living_Clima" && $VALUE eq "off" ){$VALUE=0;} elsif ($DEVICE eq "e-power"){$VALUE= sprintf "%.1f", $VALUE;}}
        # changes the reading value of the device "living_Clima" from "off" to "0" and rounds the value of the device "e-power"

        attr <device> valueFn {if ($DEVICE eq "SMA_Energymeter" && $READING eq "state"){$IGNORE=1;}}
        # the dataset is not logged if the device is "SMA_Energymeter" and the reading is "state"

        attr <device> valueFn {if ($DEVICE eq "Dum.Energy" && $READING eq "TotalConsumption"){$UNIT="W";}}
        # sets the unit of the device "Dum.Energy" to "W" if the reading is "TotalConsumption"

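        One further sketch using the read-only $EVENT variable together with $IGNORE; the event pattern is an
        assumption for illustration:

          attr <device> valueFn {if ($EVENT =~ /batteryState:.*low/){$IGNORE=1;}}
          # discards every dataset whose originating event signals a low battery, independent of device and reading
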
      verbose4Devs
        attr <device> verbose4Devs <device1>,<device2>,<device..>
        With verbose level 4, only output regarding the devices listed in this attribute is written to the
        logfile. Without this attribute, verbose 4 output of all relevant devices is written to the logfile. The
        listed devices are evaluated as regexps.

        Example
        attr <device> verbose4Devs sys.*,.*5000.*,Cam.*,global
        # Devices starting with "sys" or "Cam", devices containing "5000", and the device "global" are logged if
        verbose=4 is set.
- -=end html_DE - -=cut - - diff --git a/fhem/contrib/DS_Starter/93_DbRep.pm b/fhem/contrib/DS_Starter/93_DbRep.pm new file mode 100644 index 000000000..e7006412c --- /dev/null +++ b/fhem/contrib/DS_Starter/93_DbRep.pm @@ -0,0 +1,13849 @@ +########################################################################################################## +# $Id: 93_DbRep.pm 17451 2018-10-02 14:26:58Z DS_Starter $ +########################################################################################################## +# 93_DbRep.pm +# +# (c) 2016-2018 by Heiko Maaz +# e-mail: Heiko dot Maaz at t-online dot de +# +# This Module can be used to select and report content of databases written by 93_DbLog module +# in different manner. +# +# This script is part of fhem. +# +# Fhem is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 2 of the License, or +# (at your option) any later version. +# +# Fhem is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with fhem. If not, see . +# +# Credits: +# - viegener for some input +# - some proposals to boost and improve SQL-Statements by JoeALLb +# - function reduceLog created by Claudiu Schuster (rapster) was copied from DbLog (Version 3.12.3 08.10.2018) +# and changed to meet the requirements of DbRep +# +########################################################################################################################### +# +# Definition: define DbRep +# +# This module uses credentials of the DbLog-Device +# +########################################################################################################################### +package main; + +use strict; +use warnings; + +# Versions History intern +our %DbRep_vNotesIntern = ( + "8.3.0" => "17.10.2018 reduceLog from DbLog integrated to DbRep, textField-long as default for sqlCmd, both attributes timeOlderThan and timeDiffToNow can be set at same time", + "8.2.3" => "07.10.2018 check availability of DbLog-device at definition time of DbRep-device ", + "8.2.2" => "07.10.2018 DbRep_getMinTs changed, fix don't get the real min timestamp in rare cases ", + "8.2.1" => "07.10.2018 \$hash->{dbloghash}{HELPER}{REOPEN_RUNS_UNTIL} contains time until DB is closed ", + "8.2.0" => "05.10.2018 direct help for attributes ", + "8.1.0" => "02.10.2018 new get versionNotes command ", + "8.0.1" => "20.09.2018 DbRep_getMinTs improved", + "8.0.0" => "11.09.2018 get filesize in DbRep_WriteToDumpFile corrected, restoreMySQL for clientSide dumps, minor fixes ", + "7.20.0" => "04.09.2018 deviceRename can operate a Device name with blank, e.g. 'current balance' as old device name ", + "7.19.0" => "25.08.2018 attribute 'valueFilter' to filter datasets in fetchrows ", + "7.18.2" => "02.08.2018 fix in fetchrow function (forum:#89886), fix highlighting ", + "7.18.1" => "03.06.2018 commandref revised ", + "7.18.0" => "02.06.2018 possible use of y:(\\d) for timeDiffToNow, timeOlderThan , minor fixes of timeOlderThan, delEntries considers executeBeforeDump,executeAfterDump ", + "7.17.3" => "30.04.2017 writeToDB - readingname can be replaced by the value of attribute 'readingNameMap' ", + "7.17.2" => "22.04.2017 fix don't writeToDB if device name contain '.' 
only, minor fix in DbReadingsVal ", + "7.17.1" => "20.04.2017 fix '§' is deleted by carfilter ", + "7.17.0" => "17.04.2018 new function DbReadingsVal ", + "7.16.0" => "13.04.2018 new function dbValue (blocking) ", + "7.15.2" => "12.04.2018 fix in setting MODEL, prevent fhem from crash if wrong timestamp '0000-00-00' found in db ", + "7.15.1" => "11.04.2018 sqlCmd accept widget textField-long, Internal MODEL is set ", + "7.15.0" => "24.03.2018 new command sqlSpecial ", + "7.14.8" => "21.03.2018 fix no save into database if value=0 (DbRep_OutputWriteToDB) ", + "7.14.7" => "21.03.2018 exportToFile,importFromFile can use file as an argument and executeBeforeDump, executeAfterDump is considered ", + "7.14.6" => "18.03.2018 attribute expimpfile can use some kinds of wildcards (exportToFile, importFromFile adapted) ", + "7.14.5" => "17.03.2018 perl warnings of DbLog \$dn,\$dt,\$evt,\$rd in changeval_Push & complex ", + "7.14.4" => "11.03.2018 increased timeout of BlockingCall in DbRep_firstconnect ", + "7.14.3" => "07.03.2018 DbRep_firstconnect changed - get lowest timestamp in database, DbRep_Connect deleted ", + "7.14.2" => "04.03.2018 fix perl warning ", + "7.14.1" => "01.03.2018 currentfillup_Push bugfix for PostgreSQL ", + "7.14.0" => "26.02.2018 syncStandby ", + "7.13.3" => "25.02.2018 commandref revised (forum:#84953) ", + "7.13.2" => "24.02.2018 DbRep_firstconnect changed, bug fix in DbRep_collaggstr for aggregation = month ", + "7.13.1" => "20.02.2018 commandref revised ", + "7.13.0" => "17.02.2018 changeValue can handle perl code {} as 'new string' ", + "7.12.0" => "16.02.2018 compression of dumpfile, restore of compressed files possible ", + "7.11.0" => "12.02.2018 new command 'repairSQLite' to repair a corrupted SQLite database ", + "7.10.0" => "10.02.2018 bugfix delete attr timeYearPeriod if set other time attributes, new 'changeValue' command ", + "7.9.0" => "09.02.2018 new attribute 'avgTimeWeightMean' (time weight mean calculation), code review of selection routines, maxValue handle negative values correctly, one security second for correct create TimeArray in DbRep_normRelTime ", + "7.8.1" => "04.02.2018 bugfix if IsDisabled (again), code review, bugfix last dataset is not selected if timestamp is fully set ('date time'), fix '\$runtime_string_next' = '\$runtime_string_next.999';' if \$runtime_string_next is part of sql-execute place holder AND contains date+time ", + "7.8.0" => "04.02.2018 new command 'eraseReadings' ", + "7.7.1" => "03.02.2018 minor fix in DbRep_firstconnect if IsDisabled ", + "7.7.0" => "29.01.2018 attribute 'averageCalcForm', calculation sceme 'avgDailyMeanGWS', 'avgArithmeticMean' for averageValue ", + "7.6.1" => "27.01.2018 new attribute 'sqlCmdHistoryLength' and 'fetchMarkDuplicates' for highlighting multiple datasets by fetchrows ", + "7.6.0" => "26.01.2018 events containing '|' possible in fetchrows & delSeqDoublets, fetchrows displays multiple \$k entries with timestamp suffix \$k (as index), sqlCmdHistory (avaiable if sqlCmd was executed) ", + "7.5.5" => "25.01.2018 minor change in delSeqDoublets ", + "7.5.4" => "24.01.2018 delseqdoubl_DoParse reviewed to optimize memory usage, executeBeforeDump executeAfterDump now available for 'delSeqDoublets' ", + "7.5.3" => "23.01.2018 new attribute 'ftpDumpFilesKeep', version management added to FTP-usage ", + "7.5.2" => "23.01.2018 fix typo DumpRowsCurrrent, dumpFilesKeep can be set to '0', commandref revised ", + "7.5.1" => "20.01.2018 DbRep_DumpDone changed to create background_processing_time before execute 
'executeAfterProc' Commandref updated ", + "7.5.0" => "16.01.2018 DbRep_OutputWriteToDB, set options display/writeToDB for (max|min|sum|average|diff)Value ", + "7.4.1" => "14.01.2018 fix old dumpfiles not deleted by dumpMySQL clientSide ", + "7.4.0" => "09.01.2018 dumpSQLite/restoreSQLite, backup/restore now available when DbLog-device has reopen xxxx running, executeBeforeDump executeAfterDump also available for optimizeTables, vacuum, restoreMySQL, restoreSQLite, attribute executeBeforeDump / executeAfterDump renamed to executeBeforeProc & executeAfterProc ", + "7.3.1" => "08.01.2018 fix syntax error for perl < 5.20 ", + "7.3.0" => "07.01.2018 DbRep-charfilter avoid control characters in datasets to export, impfile_Push errortext improved, expfile_DoParse changed to use aggregation for split selects in timeslices (avoid heavy memory consumption) ", + "7.2.1" => "04.01.2018 bugfix month out of range that causes fhem crash ", + "7.2.0" => "27.12.2017 new attribute 'seqDoubletsVariance' ", + "7.1.0" => "22.12.2017 new attribute timeYearPeriod for reports correspondig to e.g. electricity billing, bugfix connection check is running after restart allthough dev is disabled ", + "7.0.0" => "18.12.2017 don't set \$runtime_string_first,\$runtime_string_next,\$ts if time/aggregation-attributes not set, change_Push redesigned, new command get blockinginfo, identify if reopen is running on dblog-device and postpone the set-command ", + "6.4.3" => "17.12.2017 bugfix in delSeqDoublets, fetchrows if datasets contain characters like \"' and s.o. ", + "6.4.2" => "15.12.2017 change 'delSeqDoublets' to respect attribute 'limit' (adviceDelete,adviceRemain), commandref revised ", + "6.4.1" => "13.12.2017 new Attribute 'sqlResultFieldSep' for field separate options of sqlCmd result ", + "6.4.0" => "10.12.2017 prepare module for usage of datetime picker widget (Forum:#35736) ", + "6.3.2" => "05.12.2017 make direction of fetchrows switchable ASC <-> DESC by attribute fetchRoute ", + "6.3.1" => "04.12.2017 fix DBD::mysql::st execute failed: Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'DEVELfhem.history.TIMESTAMP' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by Forum:https://forum.fhem.de/index.php/topic,65860.msg725595.html#msg725595 , fix currentfillup_Push PostgreSQL -> use \$runtime_string_next as Timestring during current insert ", + "6.3.0" => "04.12.2017 support addition format d:xx h:xx m:xx s:xx for attributes timeDiffToNow, timeOlderThan ", + "6.2.3" => "04.12.2017 fix localtime(time); (current time deduction) in DbRep_createTimeArray ", + "6.2.2" => "01.12.2017 support all aggregations for delSeqDoublets, better output filesize when mysql dump finished ", + "6.2.1" => "30.11.2017 support delSeqDoublets without device,reading is set and support device-devspec, reading list, minor fixes in delSeqDoublets ", + "6.2.0" => "29.11.2017 enhanced command delSeqDoublets by 'delete' ", + "6.1.0" => "29.11.2017 new command delSeqDoublets (adviceRemain,adviceDelete), add Option to LASTCMD ", + "6.0.0" => "18.11.2017 FTP transfer dumpfile after dump, delete old dumpfiles within Blockingcall (avoid freezes) commandref revised, minor fixes ", + "5.8.6" => "30.10.2017 don't limit attr reading, device if the attr contains a list ", + "5.8.5" => "19.10.2017 filter unwanted characters in 'procinfo'-result ", + "5.8.4" => "17.10.2017 DbRep_createSelectSql, DbRep_createDeleteSql, currentfillup_Push switch to 
devspec ", + "5.8.3" => "16.10.2017 change to use DbRep_createSelectSql: minValue,diffValue - DbRep_createDeleteSql: delEntries ", + "5.8.2" => "15.10.2017 sub DbRep_createTimeArray ", + "5.8.1" => "15.10.2017 change to use DbRep_createSelectSql: sumValue,averageValue,exportToFile,maxValue ", + "5.8.0" => "15.10.2017 adapt DbRep_createSelectSql for better performance if time/aggregation not set, can set table as flexible argument for countEntries, fetchrows (default: history), minor fixes ", + "5.7.1" => "13.10.2017 tableCurrentFillup fix for PostgreSQL, commandref revised ", + "5.7.0" => "09.10.2017 tableCurrentPurge, tableCurrentFillup ", + "5.6.4" => "05.10.2017 abortFn's adapted to use abortArg (Forum:77472) ", + "5.6.3" => "01.10.2017 fix crash of fhem due to wrong rmday-calculation if month is changed, Forum:#77328 ", + "5.6.2" => "28.08.2017 commandref revised ", + "5.6.1" => "18.07.2017 commandref revised, minor fixes ", + "5.6.0" => "17.07.2017 default timeout changed to 86400, new get-command 'procinfo' (MySQL) ", + "5.5.2" => "16.07.2017 dbmeta_DoParse -> show variables (no global) ", + "5.5.1" => "16.07.2017 wrong text output in state when restoreMySQL was aborted by timeout ", + "5.5.0" => "10.07.2017 replace \$hash->{dbloghash}{DBMODEL} by \$hash->{dbloghash}{MODEL} (DbLog was changed) ", + "5.4.0" => "03.07.2017 restoreMySQL - restore of csv-files (from dumpServerSide), RestoreRowsHistory/ DumpRowsHistory, Commandref revised ", + "5.3.1" => "28.06.2017 vacuum for SQLite added, readings enhanced for optimizeTables / vacuum, commandref revised ", + "5.3.0" => "26.06.2017 change of DbRep_mysqlOptimizeTables, new command optimizeTables ", + "5.2.1" => "25.06.2017 bugfix in sqlCmd_DoParse (PRAGMA, UTF8, SHOW) ", + "5.2.0" => "14.06.2017 UTF-8 support for MySQL (fetchrows, srvinfo, expfile, impfile, insert) ", + "5.1.0" => "13.06.2017 column 'UNIT' added to fetchrow result ", + "5.0.6" => "13.06.2017 add Aria engine to DbRep_mysqlOptimizeTables ", + "5.0.5" => "12.06.2017 bugfixes in DbRep_DumpAborted, some changes in dumpMySQL, optimizeTablesBeforeDump added to mysql_DoDumpServerSide, new reading DumpFileCreatedSize ", + "5.0.4" => "09.06.2017 some improvements and changes of mysql_DoDump, commandref revised, new attributes executeBeforeDump, executeAfterDump ", + "5.0.3" => "07.06.2017 mysql_DoDumpServerSide added ", + "5.0.2" => "06.06.2017 little improvements in mysql_DoDumpClientSide ", + "5.0.1" => "05.06.2017 dependencies between dumpMemlimit and dumpSpeed created, enhanced verbose 5 logging ", + "5.0.0" => "04.06.2017 MySQL Dump nonblocking added ", + "4.16.1" => "22.05.2017 encode json without JSON module, requires at least fhem.pl 14348 2017-05-22 20:25:06Z ", + "4.16.0" => "22.05.2017 format json as option of sqlResultFormat, state will never be deleted in 'DbRep_delread' ", + "4.15.1" => "20.05.2017 correction of commandref ", + "4.15.0" => "17.05.2017 SUM(VALUE),AVG(VALUE) recreated for PostgreSQL, Code reviewed and optimized ", + "4.14.2" => "16.05.2017 SQL-Statements optimized for Wildcard '%' usage if used, Wildcard '_' isn't supported furthermore, \"averageValue\", \"sumValue\", \"maxValue\", \"minValue\", \"countEntries\" performance optimized, commandref revised ", + "4.14.1" => "16.05.2017 limitation of fetchrows result datasets to 1000 by attr limit ", + "4.14.0" => "15.05.2017 UserExitFn added as separate sub (DbRep_userexit) and attr userExitFn defined, new subs ReadingsBulkUpdateTimeState, ReadingsBulkUpdateValue, ReadingsSingleUpdateValue, commandref 
revised ", + "4.13.7" => "11.05.2017 attribute sqlResultSingleFormat became sqlResultFormat, sqlResultSingle deleted and sqlCmd contains now all format possibilities (separated,mline,sline,table), commandref revised ", + "4.13.6" => "10.05.2017 minor changes ", + "4.13.5" => "09.05.2017 cover dbh prepare in eval to avoid crash (sqlResult_DoParse) ", + "4.13.4" => "09.05.2017 attribute sqlResultSingleFormat: mline sline table, attribute 'allowDeletion' is now also valid for sqlResult, sqlResultSingle and delete command is forced ", + "4.13.3" => "09.05.2017 flexible format of reading SqlResultRow_xxx for proper and sort sequence ", + "4.13.2" => "09.05.2017 sqlResult, sqlResultSingle are able to execute delete, insert, update commands error corrections ", + "4.13.1" => "09.05.2017 change substitution in sqlResult, sqlResult_DoParse ", + "4.13.0" => "09.05.2017 acceptance of viegener change with some corrections (separating lines with ]|[ in Singleline) ", + "4.12.3" => "07.05.2017 New sets sqlSelect execute arbitrary sql command returning each row as single reading (fields separated with |) allowing replacement of timestamp values according to attribute definition --> §timestamp_begin§ etc and sqlSelectSingle for executing an sql command returning a single reading (separating lines with §) ", + "4.12.2" => "17.04.2017 DbRep_checkUsePK changed ", + "4.12.1" => "07.04.2017 get tableinfo changed for MySQL ", + "4.12.0" => "31.03.2017 support of primary key for insert functions ", + "4.11.4" => "29.03.2017 bugfix timestamp in minValue, maxValue if VALUE contains more than one numeric value (like in sysmon) ", + "4.11.3" => "26.03.2017 usage of daylight saving time changed to avoid wrong selection when wintertime switch to summertime, minor bug fixes ", + "4.11.2" => "16.03.2017 bugfix in func dbmeta_DoParse (SQLITE_DB_FILENAME) ", + "4.11.1" => "28.02.2017 commandref completed ", + "4.11.0" => "18.02.2017 added [current|previous]_[month|week|day|hour]_begin and [current|previous]_[month|week|day|hour]_end as options of timestamp ", + "4.10.3" => "01.02.2017 rename reading 'diff-overrun_limit-' to 'diff_overrun_limit_', DbRep_collaggstr day aggregation changed back from 4.7.5 change ", + "4.10.2" => "16.01.2017 bugfix uninitialized value \$renmode if RenameAgent ", + "4.10.1" => "30.11.2016 bugfix importFromFile format problem if UNIT-field wasn't set ", + "4.10.0" => "28.12.2016 del_DoParse changed to use Wildcards, del_ParseDone changed to use readingNameMap ", + "4.9.0" => "23.12.2016 function readingRename added ", + "4.8.6" => "17.12.2016 new bugfix group by-clause due to incompatible changes made in MyQL 5.7.5 (Forum #msg541103) ", + "4.8.5" => "16.12.2016 bugfix group by-clause due to Forum #msg540610 ", + "4.8.4" => "13.12.2016 added 'group by ...,table_schema' to select in dbmeta_DoParse due to Forum #msg539228, commandref adapted, changed 'not_enough_data_in_period' to 'less_data_in_period' ", + "4.8.3" => "12.12.2016 balance diff to next period if value of period is 0 between two periods with values ", + "4.8.2" => "10.12.2016 bugfix negativ diff if balanced ", + "4.8.1" => "10.12.2016 added balance diff to diffValue, a difference between the last value of an old aggregation period to the first value of a new aggregation period will be take over now ", + "4.8.0" => "09.12.2016 diffValue selection chenged to 'between' ", + "4.7.7" => "08.12.2016 code review ", + "4.7.6" => "07.12.2016 DbRep version as internal, check if perl module DBI is installed ", + "4.7.5" => "05.12.2016 
DbRep_collaggstr day aggregation changed ", + "4.7.4" => "28.11.2016 sub DbRep_calcount changed due to Forum #msg529312 ", + "4.7.3" => "20.11.2016 new diffValue function made suitable to SQLite ", + "4.7.2" => "20.11.2016 commandref adapted, state = Warnings adapted ", + "4.7.1" => "17.11.2016 changed fieldlength to DbLog new standard, diffValue state Warnings due to several situations and generate readings not_enough_data_in_period, diff-overrun_limit ", + "4.7.0" => "16.11.2016 sub diffValue changed due to Forum #msg520154, attr diffAccept added, diffValue now able to calculate if counter was going to 0 ", + "4.6.0" => "31.10.2016 bugfix calc issue due to daylight saving time end (winter time) ", + "4.5.1" => "18.10.2016 get svrinfo contains SQLite database file size (MB), modified timeout routine ", + "4.5.0" => "17.10.2016 get data of dbstatus, dbvars, tableinfo, svrinfo (database dependend) ", + "4.4.0" => "13.10.2016 get function prepared ", + "4.3.0" => "11.10.2016 Preparation of get metadata ", + "4.2.0" => "10.10.2016 allow SQL-Wildcards (% _) in attr reading & attr device ", + "4.1.3" => "09.10.2016 bugfix delEntries running on SQLite ", + "4.1.2" => "08.10.2016 old device in DEF of connected DbLog device will substitute by renamed device if it is present in DEF ", + "4.1.1" => "06.10.2016 NotifyFn is getting events from global AND own device, set is reduced if ROLE=Agent, english commandref enhanced ", + "4.1.0" => "05.10.2016 DbRep_Attr changed ", + "4.0.0" => "04.10.2016 Internal/Attribute ROLE added, sub DbRep_firstconnect changed NotifyFN activated to start deviceRename if ROLE=Agent ", + "3.13.0" => "03.10.2016 added deviceRename to rename devices in database, new Internal DATABASE ", + "3.12.0" => "02.10.2016 function minValue added ", + "3.11.1" => "30.09.2016 bugfix include first and next day in calculation if Timestamp is exactly 'YYYY-MM-DD 00:00:00' ", + "3.11.0" => "29.09.2016 maxValue calculation moved to background to reduce FHEM-load ", + "3.10.1" => "28.09.2016 sub impFile -> changed \$dbh->{AutoCommit} = 0 to \$dbh->begin_work ", + "3.10.0" => "27.09.2016 diffValue calculation moved to background to reduce FHEM-load, new reading background_processing_time ", + "3.9.1" => "27.09.2016 Internal 'LASTCMD' added ", + "3.9.0" => "26.09.2016 new function importFromFile to import data from file (CSV format) ", + "3.8.0" => "16.09.2016 new attr readingPreventFromDel to prevent readings from deletion when a new operation starts ", + "3.7.3" => "11.09.2016 changed format of diffValue-reading if no value was selected ", + "3.7.2" => "04.09.2016 problem in diffValue fixed if if no value was selected ", + "3.7.1" => "31.08.2016 Reading 'errortext' added, commandref continued, exportToFile changed, diffValue changed to fix wrong timestamp if error occur ", + "3.7.0" => "30.08.2016 exportToFile added exports data to file (CSV format) ", + "3.6.0" => "29.08.2016 plausibility checks of database column character length ", + "3.5.2" => "21.08.2016 fit to new commandref style ", + "3.5.1" => "20.08.2016 commandref continued ", + "3.5.0" => "18.08.2016 new attribute timeOlderThan ", + "3.4.4" => "12.08.2016 current_year_begin, previous_year_begin, current_year_end, previous_year_end added as possible values for timestmp attribute ", + "3.4.3" => "09.08.2016 fields for input using 'insert' changed to 'date,time,value,unit'. 
Attributes device, reading will be used to complete dataset, now more informations available about faulty datasets in arithmetic operations ", + "3.4.2" => "05.08.2016 commandref complemented, fieldlength used in function 'insert' trimmed to 32 ", + "3.4.1" => "04.08.2016 check of numeric value type in functions maxvalue, diffvalue ", + "3.4.0" => "03.08.2016 function 'insert' added ", + "3.3.3" => "16.07.2016 bugfix of aggregation=week if month start is 01 and month end is 12 AND the last week of december is '01' like in 2014 (checked in version 11804) ", + "3.3.2" => "16.07.2016 readings completed with begin of selection range to ensure valid reading order, also done if readingNameMap is set ", + "3.3.1" => "15.07.2016 function 'diffValue' changed, write '-' if no value ", + "3.3.0" => "12.07.2016 function 'diffValue' added ", + "3.2.1" => "12.07.2016 DbRep_Notify prepared, switched from readingsSingleUpdate to readingsBulkUpdate ", + "3.2.0" => "11.07.2016 handling of db-errors is relocated to blockingcall-subs (checked in version 11785) ", + "3.1.1" => "10.07.2016 state turns to initialized and connected after attr 'disabled' is switched from '1' to '0' ", + "3.1.0" => "09.07.2016 new Attr 'timeDiffToNow' and change subs according to that ", + "3.0.0" => "04.07.2016 no selection if timestamp isn't set and aggregation isn't set with fetchrows, delEntries ", + "2.9.9" => "03.07.2016 english version of commandref completed ", + "2.9.8" => "01.07.2016 changed fetchrows_ParseDone to handle readingvalues with whitespaces correctly ", + "2.9.7" => "30.06.2016 moved {DBLOGDEVICE} to {HELPER}{DBLOGDEVICE} ", + "2.9.6" => "30.06.2016 sql-call changed for countEntries, averageValue, sumValue avoiding problems if no timestamp is set and aggregation is set ", + "2.9.5" => "30.06.2016 format of readingnames changed again (substitute ':' with '-' in time) ", + "2.9.4" => "30.06.2016 change readingmap to readingNameMap, prove of unsupported characters added ", + "2.9.3" => "27.06.2016 format of readingnames changed avoiding some problems after restart and splitting ", + "2.9.2" => "27.06.2016 use Time::Local added, DbRep_firstconnect added ", + "2.9.1" => "26.06.2016 german commandref added ", + "2.9.0" => "25.06.2016 attributes showproctime, timeout added ", + "2.8.1" => "24.06.2016 sql-creation of sumValue, maxValue, fetchrows changed main-routine changed ", + "2.8.0" => "24.06.2016 function averageValue changed to nonblocking function ", + "2.7.1" => "24.06.2016 changed blockingcall routines, changed to unique abort-function ", + "2.7.0" => "23.06.2016 changed function countEntries to nonblocking ", + "2.6.3" => "22.06.2016 abort-routines changed, dbconnect-routines changed ", + "2.6.2" => "21.06.2016 aggregation week corrected ", + "2.6.1" => "20.06.2016 routine maxval_ParseDone corrected ", + "2.6.0" => "31.05.2016 maxValue changed to nonblocking function ", + "2.5.3" => "31.05.2016 function delEntries changed ", + "2.5.2" => "31.05.2016 ping check changed, DbRep_Connect changed ", + "2.5.1" => "30.05.2016 sleep in nb-functions deleted ", + "2.5.0" => "30.05.2016 changed to use own \$dbh with DbLog-credentials, function sumValue, fetchrows ", + "2.4.2" => "29.05.2016 function sumValue changed ", + "2.4.1" => "29.05.2016 function fetchrow changed ", + "2.4.0" => "29.05.2016 changed to nonblocking function for sumValue ", + "2.3.0" => "28.05.2016 changed sumValue to 'prepare' with placeholders ", + "2.2.0" => "27.05.2016 changed fetchrow and delEntries function to 'prepare' with placeholders added 
nonblocking function for delEntries ", + "2.1.0" => "25.05.2016 codechange ", + "2.0.0" => "24.05.2016 added nonblocking function for fetchrow ", + "1.2.0" => "21.05.2016 function and attribute for delEntries added ", + "1.1.0" => "20.05.2016 change result-format of 'count', move runtime-counter to sub DbRep_collaggstr ", + "1.0.0" => "19.05.2016 Initial" +); + +# Versions History extern: +our %DbRep_vNotesExtern = ( + "8.3.0" => "17.10.2018 reduceLog from DbLog integrated to DbRep, textField-long as default for sqlCmd, both attributes timeOlderThan and timeDiffToNow can be set at same time -> so the selection time between can be calculated dynamically ", + "8.2.2" => "07.10.2018 fix don't get the real min timestamp in rare cases ", + "8.2.0" => "05.10.2018 direct help for attributes ", + "8.1.0" => "01.10.2018 new get versionNotes command ", + "8.0.0" => "11.09.2018 get filesize in DbRep_WriteToDumpFile corrected, restoreMySQL for clientSide dumps, minor fixes ", + "7.20.0" => "04.09.2018 deviceRename can operate a Device name with blank, e.g. 'current balance' as old device name ", + "7.19.0" => "25.08.2018 attribute 'valueFilter' to filter datasets in fetchrows ", + "7.18.2" => "02.08.2018 fix in fetchrow function (forum:#89886), fix highlighting ", + "7.18.0" => "02.06.2018 possible use of y:(\\d) for timeDiffToNow, timeOlderThan , minor fixes of timeOlderThan, delEntries considers executeBeforeDump,executeAfterDump ", + "7.17.3" => "30.04.2017 writeToDB - readingname can be replaced by the value of attribute 'readingNameMap' ", + "7.17.0" => "17.04.2018 new function DbReadingsVal ", + "7.16.0" => "13.04.2018 new function dbValue (blocking) ", + "7.15.2" => "12.04.2018 fix in setting MODEL, prevent fhem from crash if wrong timestamp '0000-00-00' found in db ", + "7.15.1" => "11.04.2018 sqlCmd accept widget textField-long, Internal MODEL is set ", + "7.15.0" => "24.03.2018 new command sqlSpecial ", + "7.14.7" => "21.03.2018 exportToFile,importFromFile can use file as an argument and executeBeforeDump, executeAfterDump is considered ", + "7.14.6" => "18.03.2018 attribute expimpfile can use some kinds of wildcards (exportToFile, importFromFile adapted) ", + "7.14.3" => "07.03.2018 DbRep_firstconnect changed - get lowest timestamp in database, DbRep_Connect deleted ", + "7.14.0" => "26.02.2018 new syncStandby command", + "7.12.0" => "16.02.2018 compression of dumpfile, restore of compressed files possible ", + "7.11.0" => "12.02.2018 new command 'repairSQLite' to repair a corrupted SQLite database ", + "7.10.0" => "10.02.2018 bugfix delete attr timeYearPeriod if set other time attributes, new 'changeValue' command ", + "7.9.0" => "09.02.2018 new attribute 'avgTimeWeightMean' (time weight mean calculation), code review of selection routines, maxValue handle negative values correctly, one security second for correct create TimeArray in DbRep_normRelTime ", + "7.8.1" => "04.02.2018 bugfix if IsDisabled (again), code review, bugfix last dataset is not selected if timestamp is fully set ('date time'), fix '\$runtime_string_next' = '\$runtime_string_next.999';' if \$runtime_string_next is part of sql-execute place holder AND contains date+time ", + "7.8.0" => "04.02.2018 new command 'eraseReadings' ", + "7.7.1" => "03.02.2018 minor fix in DbRep_firstconnect if IsDisabled ", + "7.7.0" => "29.01.2018 attribute 'averageCalcForm', calculation sceme 'avgDailyMeanGWS', 'avgArithmeticMean' for averageValue ", + "7.6.1" => "27.01.2018 new attribute 'sqlCmdHistoryLength' and 'fetchMarkDuplicates' for 
highlighting multiple datasets by fetchrows ", + "7.5.3" => "23.01.2018 new attribute 'ftpDumpFilesKeep', version management added to FTP-usage ", + "7.4.1" => "14.01.2018 fix old dumpfiles not deleted by dumpMySQL clientSide ", + "7.4.0" => "09.01.2018 dumpSQLite/restoreSQLite, backup/restore now available when DbLog-device has reopen xxxx running, executeBeforeDump executeAfterDump also available for optimizeTables, vacuum, restoreMySQL, restoreSQLite, attribute executeBeforeDump / executeAfterDump renamed to executeBeforeProc & executeAfterProc ", + "7.3.1" => "08.01.2018 fix syntax error for perl < 5.20 ", + "7.1.0" => "22.12.2017 new attribute timeYearPeriod for reports correspondig to e.g. electricity billing, bugfix connection check is running after restart allthough dev is disabled ", + "6.4.1" => "13.12.2017 new Attribute 'sqlResultFieldSep' for field separate options of sqlCmd result ", + "6.4.0" => "10.12.2017 prepare module for usage of datetime picker widget (Forum:#35736) ", + "6.1.0" => "29.11.2017 new command delSeqDoublets (adviceRemain,adviceDelete), add Option to LASTCMD ", + "6.0.0" => "18.11.2017 FTP transfer dumpfile after dump, delete old dumpfiles within Blockingcall (avoid freezes) commandref revised, minor fixes ", + "5.6.4" => "05.10.2017 abortFn's adapted to use abortArg (Forum:77472) ", + "5.6.3" => "01.10.2017 fix crash of fhem due to wrong rmday-calculation if month is changed, Forum:#77328 ", + "5.6.0" => "17.07.2017 default timeout changed to 86400, new get-command 'procinfo' (MySQL) ", + "5.4.0" => "03.07.2017 restoreMySQL - restore of csv-files (from dumpServerSide), RestoreRowsHistory/ DumpRowsHistory, Commandref revised ", + "5.3.1" => "28.06.2017 vacuum for SQLite added, readings enhanced for optimizeTables / vacuum, commandref revised ", + "5.3.0" => "26.06.2017 change of DbRep_mysqlOptimizeTables, new command optimizeTables ", + "5.0.6" => "13.06.2017 add Aria engine to DbRep_mysqlOptimizeTables ", + "5.0.3" => "07.06.2017 mysql_DoDumpServerSide added ", + "5.0.1" => "05.06.2017 dependencies between dumpMemlimit and dumpSpeed created, enhanced verbose 5 logging ", + "5.0.0" => "04.06.2017 MySQL Dump nonblocking added ", + "4.16.1" => "22.05.2017 encode json without JSON module, requires at least fhem.pl 14348 2017-05-22 20:25:06Z ", + "4.14.1" => "16.05.2017 limitation of fetchrows result datasets to 1000 by attr limit ", + "4.14.0" => "15.05.2017 UserExitFn added as separate sub (DbRep_userexit) and attr userExitFn defined, new subs ReadingsBulkUpdateTimeState, ReadingsBulkUpdateValue, ReadingsSingleUpdateValue, commandref revised ", + "4.13.4" => "09.05.2017 attribute sqlResultSingleFormat: mline sline table, attribute 'allowDeletion' is now also valid for sqlResult, sqlResultSingle and delete command is forced ", + "4.13.2" => "09.05.2017 sqlResult, sqlResultSingle are able to execute delete, insert, update commands error corrections ", + "4.12.0" => "31.03.2017 support of primary key for insert functions ", + "4.11.4" => "29.03.2017 bugfix timestamp in minValue, maxValue if VALUE contains more than one numeric value (like in sysmon) ", + "4.11.3" => "26.03.2017 usage of daylight saving time changed to avoid wrong selection when wintertime switch to summertime, minor bug fixes ", + "4.11.2" => "16.03.2017 bugfix in func dbmeta_DoParse (SQLITE_DB_FILENAME) ", + "4.11.0" => "18.02.2017 added [current|previous]_[month|week|day|hour]_begin and [current|previous]_[month|week|day|hour]_end as options of timestamp ", + "4.10.2" => "16.01.2017 bugfix 
uninitialized value \$renmode if RenameAgent ", + "4.10.1" => "30.11.2016 bugfix importFromFile format problem if UNIT-field wasn't set ", + "4.9.0" => "23.12.2016 function readingRename added ", + "4.8.6" => "17.12.2016 new bugfix group by-clause due to incompatible changes made in MyQL 5.7.5 (Forum #msg541103) ", + "4.8.5" => "16.12.2016 bugfix group by-clause due to Forum #msg540610 ", + "4.7.6" => "07.12.2016 DbRep version as internal, check if perl module DBI is installed ", + "4.7.4" => "28.11.2016 sub DbRep_calcount changed due to Forum #msg529312 ", + "4.7.3" => "20.11.2016 new diffValue function made suitable to SQLite ", + "4.6.0" => "31.10.2016 bugfix calc issue due to daylight saving time end (winter time) ", + "4.5.1" => "18.10.2016 get svrinfo contains SQLite database file size (MB), modified timeout routine ", + "4.2.0" => "10.10.2016 allow SQL-Wildcards (% _) in attr reading & attr device ", + "4.1.3" => "09.10.2016 bugfix delEntries running on SQLite ", + "3.13.0" => "03.10.2016 added deviceRename to rename devices in database, new Internal DATABASE ", + "3.12.0" => "02.10.2016 function minValue added ", + "3.11.1" => "30.09.2016 bugfix include first and next day in calculation if Timestamp is exactly 'YYYY-MM-DD 00:00:00' ", + "3.9.0" => "26.09.2016 new function importFromFile to import data from file (CSV format) ", + "3.8.0" => "16.09.2016 new attr readingPreventFromDel to prevent readings from deletion when a new operation starts ", + "3.7.2" => "04.09.2016 problem in diffValue fixed if if no value was selected ", + "3.7.1" => "31.08.2016 Reading 'errortext' added, commandref continued, exportToFile changed, diffValue changed to fix wrong timestamp if error occur ", + "3.7.0" => "30.08.2016 exportToFile added exports data to file (CSV format) ", + "3.5.0" => "18.08.2016 new attribute timeOlderThan ", + "3.4.4" => "12.08.2016 current_year_begin, previous_year_begin, current_year_end, previous_year_end added as possible values for timestamp attribute ", + "3.4.0" => "03.08.2016 function 'insert' added ", + "3.3.1" => "15.07.2016 function 'diffValue' changed, write '-' if no value ", + "3.3.0" => "12.07.2016 function 'diffValue' added ", + "3.1.1" => "10.07.2016 state turns to initialized and connected after attr 'disabled' is switched from '1' to '0' ", + "3.1.0" => "09.07.2016 new Attr 'timeDiffToNow' and change subs according to that ", + "3.0.0" => "04.07.2016 no selection if timestamp isn't set and aggregation isn't set with fetchrows, delEntries ", + "2.9.8" => "01.07.2016 changed fetchrows_ParseDone to handle readingvalues with whitespaces correctly ", + "2.9.5" => "30.06.2016 format of readingnames changed again (substitute ':' with '-' in time) ", + "2.9.4" => "30.06.2016 change readingmap to readingNameMap, prove of unsupported characters added ", + "2.9.3" => "27.06.2016 format of readingnames changed avoiding some problems after restart and splitting ", + "2.9.0" => "25.06.2016 attributes showproctime, timeout added ", + "2.8.0" => "24.06.2016 function averageValue changed to nonblocking function ", + "2.7.0" => "23.06.2016 changed function countEntries to nonblocking ", + "2.6.2" => "21.06.2016 aggregation week corrected ", + "2.6.1" => "20.06.2016 routine maxval_ParseDone corrected ", + "2.6.0" => "31.05.2016 maxValue changed to nonblocking function ", + "2.4.0" => "29.05.2016 changed to nonblocking function for sumValue ", + "2.0.0" => "24.05.2016 added nonblocking function for fetchrow ", + "1.2.0" => "21.05.2016 function and attribute for delEntries added 
", + "1.0.0" => "19.05.2016 Initial" +); + +# Hint Hash +our %DbRep_vHintsExt = ( + "2" => "Rules of german weather service for calculation of average temperatures. ", + "1" => "Some helpful FHEM-Wiki Entries" +); + +use POSIX qw(strftime); +use Time::HiRes qw(gettimeofday tv_interval); +use Scalar::Util qw(looks_like_number); +eval "use DBI;1" or my $DbRepMMDBI = "DBI"; +use DBI::Const::GetInfoType; +use Blocking; +use Color; # colorpicker Widget +use Time::Local; +use Encode qw(encode_utf8); +use IO::Compress::Gzip qw(gzip $GzipError); +use IO::Uncompress::Gunzip qw(gunzip $GunzipError); +# no if $] >= 5.018000, warnings => 'experimental'; +no if $] >= 5.017011, warnings => 'experimental::smartmatch'; + +sub DbRep_Main($$;$); +sub DbLog_cutCol($$$$$$$); # DbLog-Funktion nutzen um Daten auf maximale Länge beschneiden + +my %dbrep_col = ("DEVICE" => 64, + "TYPE" => 64, + "EVENT" => 512, + "READING" => 64, + "VALUE" => 128, + "UNIT" => 32 + ); + +################################################################################### +# DbRep_Initialize +################################################################################### +sub DbRep_Initialize($) { + my ($hash) = @_; + $hash->{DefFn} = "DbRep_Define"; + $hash->{UndefFn} = "DbRep_Undef"; + $hash->{ShutdownFn} = "DbRep_Shutdown"; + $hash->{NotifyFn} = "DbRep_Notify"; + $hash->{SetFn} = "DbRep_Set"; + $hash->{GetFn} = "DbRep_Get"; + $hash->{AttrFn} = "DbRep_Attr"; + $hash->{FW_deviceOverview} = 1; + + $hash->{AttrList} = "disable:1,0 ". + "reading ". + "allowDeletion:1,0 ". + "averageCalcForm:avgArithmeticMean,avgDailyMeanGWS,avgTimeWeightMean ". + "device " . + "dumpComment ". + "dumpCompress:1,0 ". + "dumpDirLocal ". + "dumpDirRemote ". + "dumpMemlimit ". + "dumpSpeed ". + "dumpFilesKeep:0,1,2,3,4,5,6,7,8,9,10 ". + "executeBeforeProc ". + "executeAfterProc ". + "expimpfile ". + "fetchRoute:ascent,descent ". + "fetchMarkDuplicates:red,blue,brown,green,orange ". + "ftpDebug:1,0 ". + "ftpDir ". + "ftpDumpFilesKeep:1,2,3,4,5,6,7,8,9,10 ". + "ftpPassive:1,0 ". + "ftpPwd ". + "ftpPort ". + "ftpServer ". + "ftpTimeout ". + "ftpUse:1,0 ". + "ftpUser ". + "ftpUseSSL:1,0 ". + "aggregation:hour,day,week,month,no ". + "diffAccept ". + "limit ". + "optimizeTablesBeforeDump:1,0 ". + "readingNameMap ". + "readingPreventFromDel ". + "role:Client,Agent ". + "seqDoubletsVariance ". + "showproctime:1,0 ". + "showSvrInfo ". + "showVariables ". + "showStatus ". + "showTableInfo ". + "sqlCmdHistoryLength:0,5,10,15,20,25,30,35,40,45,50 ". + "sqlResultFormat:separated,mline,sline,table,json ". + "sqlResultFieldSep:|,:,\/ ". + "timeYearPeriod ". + "timestamp_begin ". + "timestamp_end ". + "timeDiffToNow ". + "timeOlderThan ". + "timeout ". + "userExitFn ". + "valueFilter ". + $readingFnAttributes; + + # Umbenennen von existierenden Attrbuten + # $hash->{AttrRenameMap} = { "reading" => "readingFilter", + # "device" => "deviceFilter", + # }; + +return undef; +} + +################################################################################### +# DbRep_Define +################################################################################### +sub DbRep_Define($@) { + # define DbRep + # ($hash) [1] [2] + # + my ($hash, $def) = @_; + my $name = $hash->{NAME}; + + return "Error: Perl module ".$DbRepMMDBI." is missing. Install it on Debian with: sudo apt-get install libdbi-perl" if($DbRepMMDBI); + + my @a = split("[ \t][ \t]*", $def); + + if(!$a[2]) { + return "You need to specify more parameters.\n". 
"Format: define DbRep "; + } elsif (!$defs{$a[2]}) { + return "The specified DbLog-Device \"$a[2]\" doesn't exist."; + } + + $hash->{LASTCMD} = " "; + $hash->{ROLE} = AttrVal($name, "role", "Client"); + $hash->{MODEL} = $hash->{ROLE}; + $hash->{HELPER}{DBLOGDEVICE} = $a[2]; + $hash->{VERSION} = (reverse sort(keys %DbRep_vNotesIntern))[0]; + $hash->{NOTIFYDEV} = "global,".$name; # nur Events dieser Devices an DbRep_Notify weiterleiten + my $dbconn = $defs{$a[2]}{dbconn}; + $hash->{DATABASE} = (split(/;|=/, $dbconn))[1]; + $hash->{UTF8} = defined($defs{$a[2]}{UTF8})?$defs{$a[2]}{UTF8}:0; + + my ($err,$hl) = DbRep_getCmdFile($name."_sqlCmdList"); + if(!$err) { + $hash->{HELPER}{SQLHIST} = $hl; + Log3 ($name, 4, "DbRep $name - history sql commandlist read from file ".$attr{global}{modpath}."/FHEM/FhemUtils/cacheDbRep"); + } + + RemoveInternalTimer($hash); + InternalTimer(gettimeofday()+int(rand(45)), 'DbRep_firstconnect', $hash, 0); + + Log3 ($name, 4, "DbRep $name - initialized"); + ReadingsSingleUpdateValue ($hash, 'state', 'initialized', 1); + +return undef; +} + +################################################################################### +# DbRep_Set +################################################################################### +sub DbRep_Set($@) { + my ($hash, @a) = @_; + return "\"set X\" needs at least an argument" if ( @a < 2 ); + my $name = $a[0]; + my $opt = $a[1]; + my $prop = $a[2]; + my $dbh = $hash->{DBH}; + my $dblogdevice = $hash->{HELPER}{DBLOGDEVICE}; + $hash->{dbloghash} = $defs{$dblogdevice}; + my $dbmodel = $hash->{dbloghash}{MODEL}; + my $dbname = $hash->{DATABASE}; + my $sd =""; + + my (@bkps,$dir); + $dir = AttrVal($name, "dumpDirLocal", "./"); # 'dumpDirRemote' (Backup-Verz. auf dem MySQL-Server) muß gemountet sein und in 'dumpDirLocal' eingetragen sein + $dir = $dir."/" unless($dir =~ m/\/$/); + + opendir(DIR,$dir); + if ($dbmodel =~ /MYSQL/) { + $dbname = $hash->{DATABASE}; + $sd = $dbname.".*(csv|sql)"; + } elsif ($dbmodel =~ /SQLITE/) { + $dbname = $hash->{DATABASE}; + $dbname = (split /[\/]/, $dbname)[-1]; + $dbname = (split /\./, $dbname)[0]; + $sd = $dbname."_.*.sqlitebkp"; + } + while (my $file = readdir(DIR)) { + next unless (-f "$dir/$file"); + next unless ($file =~ /^$sd/); + push @bkps,$file; + } + closedir(DIR); + my $cj = @bkps?join(",",reverse(sort @bkps)):" "; + + # Drop-Down Liste bisherige Befehle in "sqlCmd" erstellen + my $hl = $hash->{HELPER}{SQLHIST}.",___purge_historylist___" if($hash->{HELPER}{SQLHIST}); + + my $setlist = "Unknown argument $opt, choose one of ". + "eraseReadings:noArg ". + (($hash->{ROLE} ne "Agent")?"sumValue:display,writeToDB ":""). + (($hash->{ROLE} ne "Agent")?"averageValue:display,writeToDB ":""). + (($hash->{ROLE} ne "Agent")?"changeValue ":""). + (($hash->{ROLE} ne "Agent")?"delEntries:noArg ":""). + (($hash->{ROLE} ne "Agent")?"delSeqDoublets:adviceRemain,adviceDelete,delete ":""). + "deviceRename ". + (($hash->{ROLE} ne "Agent")?"readingRename ":""). + (($hash->{ROLE} ne "Agent")?"exportToFile ":""). + (($hash->{ROLE} ne "Agent")?"importFromFile ":""). + (($hash->{ROLE} ne "Agent")?"maxValue:display,writeToDB ":""). + (($hash->{ROLE} ne "Agent")?"minValue:display,writeToDB ":""). + (($hash->{ROLE} ne "Agent")?"fetchrows:history,current ":""). + (($hash->{ROLE} ne "Agent")?"diffValue:display,writeToDB ":""). + (($hash->{ROLE} ne "Agent")?"insert ":""). + (($hash->{ROLE} ne "Agent")?"reduceLog ":""). + (($hash->{ROLE} ne "Agent")?"sqlCmd:textField-long ":""). 
+ (($hash->{ROLE} ne "Agent" && $hl)?"sqlCmdHistory:".$hl." ":""). + (($hash->{ROLE} ne "Agent")?"sqlSpecial:50mostFreqLogsLast2days,allDevCount,allDevReadCount ":""). + (($hash->{ROLE} ne "Agent")?"syncStandby ":""). + (($hash->{ROLE} ne "Agent")?"tableCurrentFillup:noArg ":""). + (($hash->{ROLE} ne "Agent")?"tableCurrentPurge:noArg ":""). + (($hash->{ROLE} ne "Agent" && $dbmodel =~ /MYSQL/ )?"dumpMySQL:clientSide,serverSide ":""). + (($hash->{ROLE} ne "Agent" && $dbmodel =~ /SQLITE/ )?"dumpSQLite:noArg ":""). + (($hash->{ROLE} ne "Agent" && $dbmodel =~ /SQLITE/ )?"repairSQLite ":""). + (($hash->{ROLE} ne "Agent" && $dbmodel =~ /MYSQL/ )?"optimizeTables:noArg ":""). + (($hash->{ROLE} ne "Agent" && $dbmodel =~ /SQLITE|POSTGRESQL/ )?"vacuum:noArg ":""). + (($hash->{ROLE} ne "Agent" && $dbmodel =~ /MYSQL/)?"restoreMySQL:".$cj." ":""). + (($hash->{ROLE} ne "Agent" && $dbmodel =~ /SQLITE/)?"restoreSQLite:".$cj." ":""). + (($hash->{ROLE} ne "Agent")?"countEntries:history,current ":""); + + return if(IsDisabled($name)); + + if ($opt =~ /eraseReadings/) { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + # Readings löschen die nicht in der Ausnahmeliste (Attr readingPreventFromDel) stehen + DbRep_delread($hash); + return undef; + } + + if ($opt eq "dumpMySQL" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + if ($prop eq "serverSide") { + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - ### New database serverSide dump ###"); + Log3 ($name, 3, "DbRep $name - ################################################################"); + } else { + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - ### New database clientSide dump ###"); + Log3 ($name, 3, "DbRep $name - ################################################################"); + } + # Befehl vor Procedure ausführen + DbRep_beforeproc($hash, "dump"); + DbRep_Main($hash,$opt,$prop); + return undef; + } + + if ($opt eq "dumpSQLite" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - ### New SQLite dump ###"); + Log3 ($name, 3, "DbRep $name - ################################################################"); + # Befehl vor Procedure ausführen + DbRep_beforeproc($hash, "dump"); + DbRep_Main($hash,$opt,$prop); + return undef; + } + + if ($opt eq "repairSQLite" && $hash->{ROLE} ne "Agent") { + $prop = $prop?$prop:36000; + if($prop) { + unless($prop =~ /^(\d+)$/) { return " The Value of $opt is not valid. Use only figures 0-9 without decimal places !";}; + # unless ($aVal =~ /^[0-9]+$/) { return " The Value of $aName is not valid. 
Use only figures 0-9 without decimal places !";} + } + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - ### New SQLite repair attempt ###"); + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - start repair attempt of database ".$hash->{DATABASE}); + # closetime Datenbank + my $dbloghash = $hash->{dbloghash}; + my $dbl = $dbloghash->{NAME}; + CommandSet(undef,"$dbl reopen $prop"); + + # Befehl vor Procedure ausführen + DbRep_beforeproc($hash, "repair"); + DbRep_Main($hash,$opt); + return undef; + } + + if ($opt =~ /restoreMySQL|restoreSQLite/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - ### New database Restore/Recovery ###"); + Log3 ($name, 3, "DbRep $name - ################################################################"); + # Befehl vor Procedure ausführen + DbRep_beforeproc($hash, "restore"); + DbRep_Main($hash,$opt,$prop); + return undef; + } + + if ($opt =~ /optimizeTables|vacuum/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - ### New optimize table / vacuum execution ###"); + Log3 ($name, 3, "DbRep $name - ################################################################"); + # Befehl vor Procedure ausführen + DbRep_beforeproc($hash, "optimize"); + DbRep_Main($hash,$opt); + return undef; + } + + if ($opt =~ m/delSeqDoublets/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + if ($prop =~ /delete/ && !AttrVal($hash->{NAME}, "allowDeletion", 0)) { + return " Set attribute 'allowDeletion' if you want to allow deletion of any database entries. Use it with care !"; + } + DbRep_beforeproc($hash, "delSeq"); + DbRep_Main($hash,$opt,$prop); + return undef; + } + + if ($opt =~ m/reduceLog/ && $hash->{ROLE} ne "Agent") { + if ($hash->{HELPER}{RUNNING_REDUCELOG} && $hash->{HELPER}{RUNNING_REDUCELOG}{pid} !~ m/DEAD/) { + return "reduceLog already in progress. Please wait for the current process to finish."; + } else { + delete $hash->{HELPER}{RUNNING_REDUCELOG}; + my @b = @a; + shift(@b); + $hash->{LASTCMD} = join(" ",@b); + $hash->{HELPER}{REDUCELOG} = \@a; + Log3 ($name, 3, "DbRep $name - ################################################################"); + Log3 ($name, 3, "DbRep $name - ### new reduceLog run ###"); + Log3 ($name, 3, "DbRep $name - ################################################################"); + # Befehl vor Procedure ausführen + DbRep_beforeproc($hash, "reduceLog"); + DbRep_Main($hash,$opt); + return undef; + } + } + + if ($hash->{HELPER}{RUNNING_BACKUP_CLIENT}) { + $setlist = "Unknown argument $opt, choose one of ". + (($hash->{ROLE} ne "Agent")?"cancelDump:noArg ":""); + } + + if ($hash->{HELPER}{RUNNING_REPAIR}) { + $setlist = "Unknown argument $opt, choose one of ". + (($hash->{ROLE} ne "Agent")?"cancelRepair:noArg ":""); + } + + if ($hash->{HELPER}{RUNNING_RESTORE}) { + $setlist = "Unknown argument $opt, choose one of ". 
+ (($hash->{ROLE} ne "Agent")?"cancelRestore:noArg ":""); + } + + if ($opt eq "cancelDump" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + BlockingKill($hash->{HELPER}{RUNNING_BACKUP_CLIENT}); + Log3 ($name, 3, "DbRep $name -> running Dump has been canceled"); + ReadingsSingleUpdateValue ($hash, "state", "Dump canceled", 1); + return undef; + } + + if ($opt eq "cancelRepair" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + BlockingKill($hash->{HELPER}{RUNNING_REPAIR}); + Log3 ($name, 3, "DbRep $name -> running Repair has been canceled"); + ReadingsSingleUpdateValue ($hash, "state", "Repair canceled", 1); + return undef; + } + + if ($opt eq "cancelRestore" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + BlockingKill($hash->{HELPER}{RUNNING_RESTORE}); + Log3 ($name, 3, "DbRep $name -> running Restore has been canceled"); + ReadingsSingleUpdateValue ($hash, "state", "Restore canceled", 1); + return undef; + } + + ####################################################################################################### + ## keine Aktionen außer die über diesem Eintrag solange Reopen xxxx im DbLog-Device läuft + ####################################################################################################### + if ($hash->{dbloghash}{HELPER}{REOPEN_RUNS} && $opt !~ /\?/) { + my $ro = $hash->{dbloghash}{HELPER}{REOPEN_RUNS_UNTIL}; + Log3 ($name, 3, "DbRep $name - connection $dblogdevice to db $dbname is closed until $ro - $opt postponed"); + ReadingsSingleUpdateValue ($hash, "state", "connection $dblogdevice to $dbname is closed until $ro - $opt postponed", 1); + return; + } + ####################################################################################################### + + if ($opt =~ /countEntries/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + my $table = $prop?$prop:"history"; + DbRep_Main($hash,$opt,$table); + + } elsif ($opt =~ /fetchrows/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + my $table = $prop?$prop:"history"; + DbRep_Main($hash,$opt,$table); + + } elsif ($opt =~ m/(max|min|sum|average|diff)Value/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + if (!AttrVal($hash->{NAME}, "reading", "")) { + return " The attribute reading to analyze is not set !"; + } + if ($prop && $prop =~ /writeToDB/) { + if (!AttrVal($hash->{NAME}, "device", "") || AttrVal($hash->{NAME}, "device", "") =~ /[%*:=,]/ || AttrVal($hash->{NAME}, "reading", "") =~ /[,\s]/) { + return "If you want write results back to database, attributes \"device\" and \"reading\" must be set.
+ In that case \"device\" mustn't be a devspec and mustn't contain SQL-Wildcard (%).
+ The \"reading\" to evaluate has to be a single reading and no list."; + } + } + DbRep_Main($hash,$opt,$prop); + + } elsif ($opt =~ m/delEntries|tableCurrentPurge/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + if (!AttrVal($hash->{NAME}, "allowDeletion", undef)) { + return " Set attribute 'allowDeletion' if you want to allow deletion of any database entries. Use it with care !"; + } + DbRep_beforeproc($hash, "delEntries"); + DbRep_Main($hash,$opt); + + } elsif ($opt =~ m/tableCurrentFillup/ && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + DbRep_Main($hash,$opt); + + } elsif ($opt eq "deviceRename") { + shift @a; + shift @a; + $prop = join(" ",@a); # Device Name kann Leerzeichen enthalten + Log3 ($name, 1, "DbRep $name - a: @a"); + my ($olddev, $newdev) = split(",",$prop); + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + if (!$olddev || !$newdev) {return "Both entries \"old device name\", \"new device name\" are needed. Use \"set $name deviceRename olddevname,newdevname\" ";} + $hash->{HELPER}{OLDDEV} = $olddev; + $hash->{HELPER}{NEWDEV} = $newdev; + $hash->{HELPER}{RENMODE} = "devren"; + DbRep_Main($hash,$opt); + + } elsif ($opt eq "readingRename") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + my ($oldread, $newread) = split(",",$prop); + if (!$oldread || !$newread) {return "Both entries \"old reading name\", \"new reading name\" are needed. Use \"set $name readingRename oldreadingname,newreadingname\" ";} + $hash->{HELPER}{OLDREAD} = $oldread; + $hash->{HELPER}{NEWREAD} = $newread; + $hash->{HELPER}{RENMODE} = "readren"; + DbRep_Main($hash,$opt); + + } elsif ($opt eq "insert" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + if ($prop) { + if (!AttrVal($hash->{NAME}, "device", "") || !AttrVal($hash->{NAME}, "reading", "") ) { + return "One or both of attributes \"device\", \"reading\" are not set. It's mandatory to set both to complete dataset for manual insert !"; + } + + # Attribute device & reading dürfen kein SQL-Wildcard % enthalten + return "One or both of attributes \"device\", \"reading\" containing SQL wildcard \"%\". Wildcards are not allowed in function manual insert !" + if(AttrVal($hash->{NAME},"device","") =~ m/%/ || AttrVal($hash->{NAME},"reading","") =~ m/%/ ); + + my ($i_date, $i_time, $i_value, $i_unit) = split(",",$prop); + + if (!$i_date || !$i_time || !$i_value) {return "At least data for \"Date\", \"Time\" and \"Value\" is needed to insert. \"Unit\" is optional. Inputformat is 'YYYY-MM-DD,HH:MM:SS,,' ";} + + unless ($i_date =~ /(\d{4})-(\d{2})-(\d{2})/) {return "Input for date is not valid. Use format YYYY-MM-DD !";} + unless ($i_time =~ /(\d{2}):(\d{2}):(\d{2})/) {return "Input for time is not valid. Use format HH:MM:SS !";} + my $i_timestamp = $i_date." 
".$i_time; + my ($yyyy, $mm, $dd, $hh, $min, $sec) = ($i_timestamp =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/); + + eval { my $ts = timelocal($sec, $min, $hh, $dd, $mm-1, $yyyy-1900); }; + + if ($@) { + my @l = split (/at/, $@); + return " Timestamp is out of range - $l[0]"; + } + + my $i_device = AttrVal($hash->{NAME}, "device", ""); + my $i_reading = AttrVal($hash->{NAME}, "reading", ""); + + # Daten auf maximale Länge (entsprechend der Feldlänge in DbLog DB create-scripts) beschneiden wenn nicht SQLite + if ($dbmodel ne 'SQLITE') { + $i_device = substr($i_device,0, $dbrep_col{DEVICE}); + $i_reading = substr($i_reading,0, $dbrep_col{READING}); + $i_value = substr($i_value,0, $dbrep_col{VALUE}); + $i_unit = substr($i_unit,0, $dbrep_col{UNIT}) if($i_unit); + } + + $hash->{HELPER}{I_TIMESTAMP} = $i_timestamp; + $hash->{HELPER}{I_DEVICE} = $i_device; + $hash->{HELPER}{I_READING} = $i_reading; + $hash->{HELPER}{I_VALUE} = $i_value; + $hash->{HELPER}{I_UNIT} = $i_unit; + $hash->{HELPER}{I_TYPE} = my $i_type = "manual"; + $hash->{HELPER}{I_EVENT} = my $i_event = "manual"; + + } else { + return "Data to insert to table 'history' are needed like this pattern: 'Date,Time,Value,[Unit]'. \"Unit\" is optional. Spaces are not allowed !"; + } + DbRep_Main($hash,$opt); + + } elsif ($opt eq "exportToFile" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + my $f = $prop if($prop); + if (!AttrVal($hash->{NAME}, "expimpfile", "") && !$f) { + return "\"$opt\" needs a file as an argument or the attribute \"expimpfile\" (path and filename) to be set !"; + } + DbRep_Main($hash,$opt,$f); + + } elsif ($opt eq "importFromFile" && $hash->{ROLE} ne "Agent") { + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + my $f = $prop if($prop); + if (!AttrVal($hash->{NAME}, "expimpfile", "") && !$f) { + return "\"$opt\" needs a file as an argument or the attribute \"expimpfile\" (path and filename) to be set !"; + } + DbRep_Main($hash,$opt,$f); + + } elsif ($opt =~ /sqlCmd|sqlSpecial|sqlCmdHistory/) { + return "\"set $opt\" needs at least an argument" if ( @a < 3 ); + # remove arg 0, 1 to get SQL command + my $sqlcmd; + if($opt eq "sqlSpecial") { + $sqlcmd = $prop; + } + if($opt eq "sqlCmd") { + my @cmd = @a; + shift @cmd; shift @cmd; + $sqlcmd = join(" ", @cmd); + $sqlcmd =~ tr/ A-Za-z0-9!"#$§%&'()*+,-.\/:;<=>?@[\\]^_`{|}~äöüÄÖÜ߀/ /cs; + } + if($opt eq "sqlCmdHistory") { + $prop =~ tr/ A-Za-z0-9!"#$%&'()*+,-.\/:;<=>?@[\\]^_`{|}~äöüÄÖÜ߀/ /cs; + $prop =~ s//,/g; + $sqlcmd = $prop; + if($sqlcmd eq "___purge_historylist___") { + delete($hash->{HELPER}{SQLHIST}); + DbRep_setCmdFile($name."_sqlCmdList","",$hash); # Löschen der sql History Liste im DbRep-Keyfile + return "SQL command historylist of $name deleted."; + } + } + $hash->{LASTCMD} = $sqlcmd?"$opt $sqlcmd":"$opt"; + if ($sqlcmd =~ m/^\s*delete/is && !AttrVal($hash->{NAME}, "allowDeletion", undef)) { + return "Attribute 'allowDeletion = 1' is needed for command '$sqlcmd'. Use it with care !"; + } + DbRep_Main($hash,$opt,$sqlcmd); + + } elsif ($opt =~ /changeValue/) { + shift @a; + shift @a; + $prop = join(" ", @a); + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + unless($prop =~ m/^\s*(".*",".*")\s*$/) {return "Both entries \"old string\", \"new string\" are needed. 
Use \"set $name changeValue \"old string\",\"new string\" (use quotes)";} + my $complex = 0; + my ($oldval,$newval) = ($prop =~ /^\s*"(.*?)","(.*?)"\s*$/); + + if($newval =~ m/[{}]/) { + if($newval =~ m/^\s*(\{.*\})\s*$/s) { + $newval = $1; + $complex = 1; + my %specials = ( + "%VALUE" => $name, + "%UNIT" => $name, + ); + $newval = EvalSpecials($newval, %specials); + } else { + return "The expression of \"new string\" has to be included in \"{ }\" "; + } + } + $hash->{HELPER}{COMPLEX} = $complex; + $hash->{HELPER}{OLDVAL} = $oldval; + $hash->{HELPER}{NEWVAL} = $newval; + $hash->{HELPER}{RENMODE} = "changeval"; + DbRep_beforeproc($hash, "changeval"); + DbRep_Main($hash,$opt); + + } elsif ($opt =~ m/syncStandby/ && $hash->{ROLE} ne "Agent") { + unless($prop) {return "A DbLog-device (standby) is needed to sync. Use \"set $name syncStandby \" ";} + if(!exists($defs{$prop}) || $defs{$prop}->{TYPE} ne "DbLog") { + return "The device \"$prop\" doesn't exist or is not a DbLog-device. "; + } + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + DbRep_Main($hash,$opt,$prop); + + } else { + return "$setlist"; + } + +return undef; +} + +################################################################################### +# DbRep_Get +################################################################################### +sub DbRep_Get($@) { + my ($hash, @a) = @_; + return "\"get X\" needs at least an argument" if ( @a < 2 ); + my $name = $a[0]; + my $opt = $a[1]; + my $prop = $a[2]; + my $dbh = $hash->{DBH}; + my $dblogdevice = $hash->{HELPER}{DBLOGDEVICE}; + $hash->{dbloghash} = $defs{$dblogdevice}; + my $dbmodel = $hash->{dbloghash}{MODEL}; + my $dbname = $hash->{DATABASE}; + my $to = AttrVal($name, "timeout", "86400"); + + my $getlist = "Unknown argument $opt, choose one of ". + "svrinfo:noArg ". + "blockinginfo:noArg ". + "minTimestamp:noArg ". + "dbValue ". + (($dbmodel eq "MYSQL")?"dbstatus:noArg ":""). + (($dbmodel eq "MYSQL")?"tableinfo:noArg ":""). + (($dbmodel eq "MYSQL")?"procinfo:noArg ":""). + (($dbmodel eq "MYSQL")?"dbvars:noArg ":""). + "versionNotes:noArg " + ; + + return if(IsDisabled($name)); + + if ($hash->{dbloghash}{HELPER}{REOPEN_RUNS} && $opt !~ /\?|procinfo|blockinginfo/) { + my $ro = $hash->{dbloghash}{HELPER}{REOPEN_RUNS_UNTIL}; + Log3 ($name, 3, "DbRep $name - connection $dblogdevice to db $dbname is closed until $ro - $opt postponed"); + ReadingsSingleUpdateValue ($hash, "state", "connection $dblogdevice to $dbname is closed until $ro - $opt postponed", 1); + return; + } + + if ($opt =~ /dbvars|dbstatus|tableinfo|procinfo/) { + return "Dump is running - try again later !" if($hash->{HELPER}{RUNNING_BACKUP_CLIENT}); + $hash->{LASTCMD} = $prop?"$opt $prop":"$opt"; + return "The operation \"$opt\" isn't available with database type $dbmodel" if ($dbmodel ne 'MYSQL'); + ReadingsSingleUpdateValue ($hash, "state", "running", 1); + DbRep_delread($hash); # Readings löschen die nicht in der Ausnahmeliste (Attr readingPreventFromDel) stehen + $hash->{HELPER}{RUNNING_PID} = BlockingCall("dbmeta_DoParse", "$name|$opt", "dbmeta_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "svrinfo") { + return "Dump is running - try again later !" 
+###################################################################################
+# DbRep_Get
+###################################################################################
+sub DbRep_Get($@) {
+  my ($hash, @a) = @_;
+  return "\"get X\" needs at least an argument" if ( @a < 2 );
+  my $name        = $a[0];
+  my $opt         = $a[1];
+  my $prop        = $a[2];
+  my $dbh         = $hash->{DBH};
+  my $dblogdevice = $hash->{HELPER}{DBLOGDEVICE};
+  $hash->{dbloghash} = $defs{$dblogdevice};
+  my $dbmodel     = $hash->{dbloghash}{MODEL};
+  my $dbname      = $hash->{DATABASE};
+  my $to          = AttrVal($name, "timeout", "86400");
+
+  my $getlist = "Unknown argument $opt, choose one of ".
+                "svrinfo:noArg ".
+                "blockinginfo:noArg ".
+                "minTimestamp:noArg ".
+                "dbValue ".
+                (($dbmodel eq "MYSQL")?"dbstatus:noArg ":"").
+                (($dbmodel eq "MYSQL")?"tableinfo:noArg ":"").
+                (($dbmodel eq "MYSQL")?"procinfo:noArg ":"").
+                (($dbmodel eq "MYSQL")?"dbvars:noArg ":"").
+                "versionNotes:noArg "
+                ;
+
+  return if(IsDisabled($name));
+
+  if ($hash->{dbloghash}{HELPER}{REOPEN_RUNS} && $opt !~ /\?|procinfo|blockinginfo/) {
+      my $ro = $hash->{dbloghash}{HELPER}{REOPEN_RUNS_UNTIL};
+      Log3 ($name, 3, "DbRep $name - connection $dblogdevice to db $dbname is closed until $ro - $opt postponed");
+      ReadingsSingleUpdateValue ($hash, "state", "connection $dblogdevice to $dbname is closed until $ro - $opt postponed", 1);
+      return;
+  }
+
+  if ($opt =~ /dbvars|dbstatus|tableinfo|procinfo/) {
+      return "Dump is running - try again later !" if($hash->{HELPER}{RUNNING_BACKUP_CLIENT});
+      $hash->{LASTCMD} = $prop?"$opt $prop":"$opt";
+      return "The operation \"$opt\" isn't available with database type $dbmodel" if ($dbmodel ne 'MYSQL');
+      ReadingsSingleUpdateValue ($hash, "state", "running", 1);
+      DbRep_delread($hash);      # delete readings that are not in the exception list (attr readingPreventFromDel)
+      $hash->{HELPER}{RUNNING_PID} = BlockingCall("dbmeta_DoParse", "$name|$opt", "dbmeta_ParseDone", $to, "DbRep_ParseAborted", $hash);
+
+  } elsif ($opt eq "svrinfo") {
+      return "Dump is running - try again later !" if($hash->{HELPER}{RUNNING_BACKUP_CLIENT});
+      $hash->{LASTCMD} = $prop?"$opt $prop":"$opt";
+      DbRep_delread($hash);
+      ReadingsSingleUpdateValue ($hash, "state", "running", 1);
+      $hash->{HELPER}{RUNNING_PID} = BlockingCall("dbmeta_DoParse", "$name|$opt", "dbmeta_ParseDone", $to, "DbRep_ParseAborted", $hash);
+
+  } elsif ($opt eq "blockinginfo") {
+      return "Dump is running - try again later !" if($hash->{HELPER}{RUNNING_BACKUP_CLIENT});
+      $hash->{LASTCMD} = $prop?"$opt $prop":"$opt";
+      DbRep_delread($hash);
+      ReadingsSingleUpdateValue ($hash, "state", "running", 1);
+      DbRep_getblockinginfo($hash);
+
+  } elsif ($opt eq "minTimestamp") {
+      return "Dump is running - try again later !" if($hash->{HELPER}{RUNNING_BACKUP_CLIENT});
+      $hash->{LASTCMD} = $prop?"$opt $prop":"$opt";
+      DbRep_delread($hash);
+      ReadingsSingleUpdateValue ($hash, "state", "running", 1);
+      DbRep_firstconnect($hash);
+
+  } elsif ($opt =~ /dbValue/) {
+      return "get \"$opt\" needs at least an argument" if ( @a < 3 );
+      # remove arg 0, 1 to get SQL command
+      my @cmd = @a;
+      shift @cmd; shift @cmd;
+      my $sqlcmd = join(" ",@cmd);
+      $sqlcmd =~ tr/ A-Za-z0-9!"#$§%&'()*+,-.\/:;<=>?@[\\]^_`{|}~äöüÄÖÜ߀/ /cs;
+      $hash->{LASTCMD} = $sqlcmd?"$opt $sqlcmd":"$opt";
+      if ($sqlcmd =~ m/^\s*delete/is && !AttrVal($hash->{NAME}, "allowDeletion", undef)) {
+          return "Attribute 'allowDeletion = 1' is needed for command '$sqlcmd'. Use it with care !";
+      }
+      my ($err,$ret) = DbRep_dbValue($name,$sqlcmd);
+      return $err?$err:$ret;
+
+  } elsif ($opt =~ /versionNotes/) {
+      my $header  = "<b>Module release information table</b>";
+      my $header1 = "<b>Helpful hints</b>";
+
+      # build the output table
+      my ($ret,$val0,$val1);
+      my $i = 0;
+
+      $ret  = "<html>";
+
+      $ret .= sprintf("<div class=\"makeTable wide\"; style=\"text-align:left\">$header1 <br>");
+      $ret .= "<table class=\"block wide internals\">";
+      $ret .= "<tbody>";
+      $ret .= "<tr class=\"even\">";
+      $i = 0;
+      foreach my $key (reverse sort(keys %DbRep_vHintsExt)) {
+          $val0 = $DbRep_vHintsExt{$key};
+          $ret .= sprintf("<td style=\"vertical-align:top\"><b>$key</b>  </td><td style=\"vertical-align:top\">$val0</td>" );
+          $ret .= "</tr>";
+          $i++;
+          if ($i & 1) {
+              # $i is odd
+              $ret .= "<tr class=\"odd\">";
+          } else {
+              $ret .= "<tr class=\"even\">";
+          }
+      }
+      $ret .= "</tr>";
+      $ret .= "</tbody>";
+      $ret .= "</table>";
+      $ret .= "</div>";
+
+      $ret .= sprintf("<div class=\"makeTable wide\"; style=\"text-align:left\">$header <br>");
+      $ret .= "<table class=\"block wide internals\">";
+      $ret .= "<tbody>";
+      $ret .= "<tr class=\"even\">";
+      $i = 0;
+      foreach my $key (reverse sort(keys %DbRep_vNotesExtern)) {
+          ($val0,$val1) = split(/\s/,$DbRep_vNotesExtern{$key},2);
+          $ret .= sprintf("<td style=\"vertical-align:top\"><b>$key</b>  </td><td style=\"vertical-align:top\">$val0  </td><td>$val1</td>" );
+          $ret .= "</tr>";
+          $i++;
+          if ($i & 1) {
+              # $i is odd
+              $ret .= "<tr class=\"odd\">";
+          } else {
+              $ret .= "<tr class=\"even\">";
+          }
+      }
+      $ret .= "</tr>";
+      $ret .= "</tbody>";
+      $ret .= "</table>";
+      $ret .= "</div>";
+
+      $ret .= "</html>";
+
+      return $ret;
+
+  } else {
+      return "$getlist";
+  }
+
+return undef;
+}
+
+###################################################################################
+# DbRep_Attr
+###################################################################################
+sub DbRep_Attr($$$$) {
+  my ($cmd,$name,$aName,$aVal) = @_;
+  my $hash = $defs{$name};
+  $hash->{dbloghash} = $defs{$hash->{HELPER}{DBLOGDEVICE}};
+  my $dbmodel = $hash->{dbloghash}{MODEL};
+  my $do;
+
+  # $cmd can be "del" or "set"
+  # $name is device name
+  # aName and aVal are Attribute name and value
+
+  # attributes that are not allowed / not settable if role = Agent
+  my @agentnoattr = qw(aggregation
+                       allowDeletion
+                       dumpDirLocal
+                       reading
+                       readingNameMap
+                       readingPreventFromDel
+                       device
+                       diffAccept
+                       executeBeforeProc
+                       executeAfterProc
+                       expimpfile
+                       ftpUse
+                       ftpUser
+                       ftpUseSSL
+                       ftpDebug
+                       ftpDir
+                       ftpPassive
+                       ftpPort
+                       ftpPwd
+                       ftpServer
+                       ftpTimeout
+                       dumpMemlimit
+                       dumpComment
+                       dumpSpeed
+                       optimizeTablesBeforeDump
+                       seqDoubletsVariance
+                       sqlCmdHistoryLength
+                       timeYearPeriod
+                       timestamp_begin
+                       timestamp_end
+                       timeDiffToNow
+                       timeOlderThan
+                       sqlResultFormat
+                       );
+
+  if ($aName eq "disable") {
+      if($cmd eq "set") {
+          $do = ($aVal) ? 1 : 0;
+      }
+      $do = 0 if($cmd eq "del");
+      my $val = ($do == 1 ? "disabled" : "initialized");
+      ReadingsSingleUpdateValue ($hash, "state", $val, 1);
+      if ($do == 0) {
+          RemoveInternalTimer($hash);
+          InternalTimer(time+5, 'DbRep_firstconnect', $hash, 0);
+      } else {
+          my $dbh = $hash->{DBH};
+          $dbh->disconnect() if($dbh);
+      }
+  }
+
+  if ($cmd eq "set" && $hash->{ROLE} eq "Agent") {
+      foreach (@agentnoattr) {
+          return ("Attribute $aName is not usable because the role of $name is \"$hash->{ROLE}\" ") if ($_ eq $aName);
+      }
+  }
+
+  if ($aName eq "readingPreventFromDel") {
+      if($cmd eq "set") {
+          if($aVal =~ / /) {return "Usage of $aName is wrong. Use a comma separated list of readings which should be prevented from deletion when a new selection starts.";}
+          $hash->{HELPER}{RDPFDEL} = $aVal;
+      } else {
+          delete $hash->{HELPER}{RDPFDEL} if($hash->{HELPER}{RDPFDEL});
+      }
+  }
+
+  if ($aName eq "sqlCmdHistoryLength") {
+      if($cmd eq "set") {
+          $do = ($aVal) ? 1 : 0;
+      }
+      $do = 0 if($cmd eq "del");
+      if ($do == 0) {
+          delete($hash->{HELPER}{SQLHIST});
+          DbRep_setCmdFile($name."_sqlCmdList","",$hash);      # delete the sql history list in the DbRep keyfile
+      }
+  }
+  if ($aName eq "userExitFn") {
+      if($cmd eq "set") {
+          if(!$aVal) {return "Usage of $aName is wrong. The function has to be specified as \"<UserExitFn> [reading:value]\" ";}
+          my @a = split(/ /,$aVal,2);
+          $hash->{HELPER}{USEREXITFN}  = $a[0];
+          $hash->{HELPER}{UEFN_REGEXP} = $a[1] if($a[1]);
+      } else {
+          delete $hash->{HELPER}{USEREXITFN} if($hash->{HELPER}{USEREXITFN});
+          delete $hash->{HELPER}{UEFN_REGEXP} if($hash->{HELPER}{UEFN_REGEXP});
+      }
+  }
+
+  if ($aName eq "role") {
+      if($cmd eq "set") {
+          if ($aVal eq "Agent") {
+              # check whether an agent is already defined for the connected database -> then this DbRep device can't take the agent role
+              foreach(devspec2array("TYPE=DbRep")) {
+                  my $devname = $_;
+                  next if($devname eq $name);
+                  my $devrole = $defs{$_}{ROLE};
+                  my $devdb   = $defs{$_}{DATABASE};
+                  if ($devrole eq "Agent" && $devdb eq $hash->{DATABASE}) { return "There is already an Agent device: $devname defined for database $hash->{DATABASE} !"; }
+              }
+              # delete attributes that are not allowed for agents if they are set
+              foreach (@agentnoattr) {
+                  delete($attr{$name}{$_});
+              }
+              $attr{$name}{icon} = "security";
+          }
+          $do = $aVal;
+      } else {
+          $do = "Client";
+      }
+      $hash->{ROLE}  = $do;
+      $hash->{MODEL} = $hash->{ROLE};
+      delete($attr{$name}{icon}) if($do eq "Client");
+  }
+
+  if ($cmd eq "set") {
+      if ($aName =~ /valueFilter/) {
+          eval { "Hallo" =~ m/$aVal/ };
+          return "Bad regexp: $@" if($@);
+      }
+
+      if ($aName =~ /seqDoubletsVariance/) {
+          unless (looks_like_number($aVal)) { return " The Value of $aName is not valid. Only figures are allowed !";}
+      }
+
+      if ($aName eq "timeYearPeriod") {
+          # example period: 06-01 02-28
+          unless ($aVal =~ /^(\d{2})-(\d{2}) (\d{2})-(\d{2})$/ )
+              { return "The Value of \"$aName\" isn't valid. Set the account period as \"MM-DD MM-DD\".";}
+          my ($mm1, $dd1, $mm2, $dd2) = ($aVal =~ /^(\d{2})-(\d{2}) (\d{2})-(\d{2})$/);
+          my (undef,undef,undef,$mday,$mon,$year1,undef,undef,undef) = localtime(time);     # derive the current time
+          my $year2 = $year1;
+          #      a     b     c     d
+          # "06-01 02-28" - if c < a && $mon < a -> year(a)-1, else year(c)+1
+          my $c = ($mon+1).$mday;
+          my $e = $mm2.$dd2;
+          if ($mm2 <= $mm1 && $c <= $e) {
+              $year1--;
+          } else {
+              $year2++;
+          }
+          eval { my $t1 = timelocal(00, 00, 00, $dd1, $mm1-1, $year1-1900);
+                 my $t2 = timelocal(00, 00, 00, $dd2, $mm2-1, $year2-1900); };
+          if ($@) {
+              my @l = split (/at/, $@);
+              return " The Value of $aName is out of range - $l[0]";
+          }
+          delete($attr{$name}{timestamp_begin}) if ($attr{$name}{timestamp_begin});
+          delete($attr{$name}{timestamp_end})   if ($attr{$name}{timestamp_end});
+          delete($attr{$name}{timeDiffToNow})   if ($attr{$name}{timeDiffToNow});
+          delete($attr{$name}{timeOlderThan})   if ($attr{$name}{timeOlderThan});
+          return undef;
+      }
+      if ($aName eq "timestamp_begin" || $aName eq "timestamp_end") {
+          my ($a,$b,$c) = split('_',$aVal);
+          if ($a =~ /^current$|^previous$/ && $b =~ /^hour$|^day$|^week$|^month$|^year$/ && $c =~ /^begin$|^end$/) {
+              delete($attr{$name}{timeDiffToNow})  if ($attr{$name}{timeDiffToNow});
+              delete($attr{$name}{timeOlderThan})  if ($attr{$name}{timeOlderThan});
+              delete($attr{$name}{timeYearPeriod}) if ($attr{$name}{timeYearPeriod});
+              return undef;
+          }
+          $aVal = DbRep_formatpicker($aVal);
+          unless ($aVal =~ /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/)
+              {return " The Value of $aName is not valid. 
Use format YYYY-MM-DD HH:MM:SS or one of \"current_[year|month|day|hour]_begin\",\"current_[year|month|day|hour]_end\", \"previous_[year|month|day|hour]_begin\", \"previous_[year|month|day|hour]_end\" !";} + + my ($yyyy, $mm, $dd, $hh, $min, $sec) = ($aVal =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/); + + eval { my $epoch_seconds_begin = timelocal($sec, $min, $hh, $dd, $mm-1, $yyyy-1900); }; + + if ($@) { + my @l = split (/at/, $@); + return " The Value of $aName is out of range - $l[0]"; + } + delete($attr{$name}{timeDiffToNow}) if ($attr{$name}{timeDiffToNow}); + delete($attr{$name}{timeOlderThan}) if ($attr{$name}{timeOlderThan}); + delete($attr{$name}{timeYearPeriod}) if ($attr{$name}{timeYearPeriod}); + } + if ($aName =~ /ftpTimeout|timeout|diffAccept/) { + unless ($aVal =~ /^[0-9]+$/) { return " The Value of $aName is not valid. Use only figures 0-9 without decimal places !";} + } + if ($aName eq "readingNameMap") { + unless ($aVal =~ m/^[A-Za-z\d_\.-]+$/) { return " Unsupported character in $aName found. Use only A-Z a-z _ . -";} + } + if ($aName eq "timeDiffToNow") { + unless ($aVal =~ /^[0-9]+$/ || $aVal =~ /^\s*[ydhms]:([\d]+)\s*/ && $aVal !~ /.*,.*/ ) + { return "The Value of \"$aName\" isn't valid. Set simple seconds like \"86400\" or use form like \"y:1 d:10 h:6 m:12 s:20\". Refer to commandref !";} + delete($attr{$name}{timestamp_begin}) if ($attr{$name}{timestamp_begin}); + delete($attr{$name}{timestamp_end}) if ($attr{$name}{timestamp_end}); + delete($attr{$name}{timeYearPeriod}) if ($attr{$name}{timeYearPeriod}); + } + if ($aName eq "timeOlderThan") { + unless ($aVal =~ /^[0-9]+$/ || $aVal =~ /^\s*[ydhms]:([\d]+)\s*/ && $aVal !~ /.*,.*/ ) + { return "The Value of \"$aName\" isn't valid. Set simple seconds like \"86400\" or use form like \"y:1 d:10 h:6 m:12 s:20\". Refer to commandref !";} + delete($attr{$name}{timestamp_begin}) if ($attr{$name}{timestamp_begin}); + delete($attr{$name}{timestamp_end}) if ($attr{$name}{timestamp_end}); + delete($attr{$name}{timeYearPeriod}) if ($attr{$name}{timeYearPeriod}); + } + if ($aName eq "dumpMemlimit" || $aName eq "dumpSpeed") { + unless ($aVal =~ /^[0-9]+$/) { return "The Value of $aName is not valid. Use only figures 0-9 without decimal places.";} + my $dml = AttrVal($name, "dumpMemlimit", 100000); + my $ds = AttrVal($name, "dumpSpeed", 10000); + if($aName eq "dumpMemlimit") { + unless($aVal >= (10 * $ds)) {return "The Value of $aName has to be at least '10 x dumpSpeed' ! ";} + } + if($aName eq "dumpSpeed") { + unless($aVal <= ($dml / 10)) {return "The Value of $aName mustn't be greater than 'dumpMemlimit / 10' ! ";} + } + } + if ($aName eq "ftpUse") { + delete($attr{$name}{ftpUseSSL}); + } + if ($aName eq "ftpUseSSL") { + delete($attr{$name}{ftpUse}); + } + if ($aName eq "reading" || $aName eq "device") { + if ($dbmodel && $dbmodel ne 'SQLITE') { + my $attrname = uc($aName); + if ($dbmodel eq 'POSTGRESQL' && $aVal !~ m/,/) { + return "Length of \"$aName\" is too big. Maximum length for database type $dbmodel is $dbrep_col{$attrname}" if(length($aVal) > $dbrep_col{$attrname}); + } elsif ($dbmodel eq 'MYSQL' && $aVal !~ m/,/) { + return "Length of \"$aName\" is too big. 
Maximum length for database type $dbmodel is $dbrep_col{$attrname}" if(length($aVal) > $dbrep_col{$attrname}); + } + } + } + + } +return undef; +} + +################################################################################### +# DbRep_Notify Eventverarbeitung +################################################################################### +sub DbRep_Notify($$) { + # Es werden nur die Events von Geräten verarbeitet die im Hash $hash->{NOTIFYDEV} gelistet sind (wenn definiert). + # Dadurch kann die Menge der Events verringert werden. In sub DbRep_Define angeben. + # Beispiele: + # $hash->{NOTIFYDEV} = "global"; + # $hash->{NOTIFYDEV} = "global,Definition_A,Definition_B"; + + my ($own_hash, $dev_hash) = @_; + my $myName = $own_hash->{NAME}; # Name des eigenen Devices + my $devName = $dev_hash->{NAME}; # Device welches Events erzeugt hat + + return if(IsDisabled($myName)); # Return if the module is disabled + + my $events = deviceEvents($dev_hash,0); + return if(!$events); + + foreach my $event (@{$events}) { + $event = "" if(!defined($event)); + my @evl = split("[ \t][ \t]*", $event); + +# if ($devName = $myName && $evl[0] =~ /done/) { +# InternalTimer(time+1, "browser_refresh", $own_hash, 0); +# } + + if ($own_hash->{ROLE} eq "Agent") { + # wenn Rolle "Agent" Verbeitung von RENAMED Events + next if ($event !~ /RENAMED/); + + my $strucChanged; + # altes in neues device in der DEF des angeschlossenen DbLog-device ändern (neues device loggen) + my $dblog_name = $own_hash->{dbloghash}{NAME}; # Name des an den DbRep-Agenten angeschlossenen DbLog-Dev + my $dblog_hash = $defs{$dblog_name}; + + if ( $dblog_hash->{DEF} =~ m/( |\(|\|)$evl[1]( |\)|\||:)/ ) { + $dblog_hash->{DEF} =~ s/$evl[1]/$evl[2]/; + $dblog_hash->{REGEXP} =~ s/$evl[1]/$evl[2]/; + # Definitionsänderung wurde vorgenommen + $strucChanged = 1; + Log3 ($myName, 3, "DbRep Agent $myName - $dblog_name substituted in DEF, old: \"$evl[1]\", new: \"$evl[2]\" "); + } + + # DEVICE innerhalb angeschlossener Datenbank umbenennen + Log3 ($myName, 4, "DbRep Agent $myName - Evt RENAMED rec - old device: $evl[1], new device: $evl[2] -> start deviceRename in DB: $own_hash->{DATABASE} "); + $own_hash->{HELPER}{OLDDEV} = $evl[1]; + $own_hash->{HELPER}{NEWDEV} = $evl[2]; + $own_hash->{HELPER}{RENMODE} = "devren"; + DbRep_Main($own_hash,"deviceRename"); + + # die Attribute "device" in allen DbRep-Devices mit der Datenbank = DB des Agenten von alten Device in neues Device ändern + foreach(devspec2array("TYPE=DbRep")) { + my $repname = $_; + next if($_ eq $myName); + my $repattrdevice = $attr{$_}{device}; + next if(!$repattrdevice); + my $repdb = $defs{$_}{DATABASE}; + if ($repattrdevice eq $evl[1] && $repdb eq $own_hash->{DATABASE}) { + $attr{$_}{device} = $evl[2]; + # Definitionsänderung wurde vorgenommen + $strucChanged = 1; + Log3 ($myName, 3, "DbRep Agent $myName - $_ attr device changed, old: \"$evl[1]\", new: \"$evl[2]\" "); + } + } + # if ($strucChanged) {CommandSave("","")}; + } + } +return; +} + +################################################################################### +# DbRep_Undef +################################################################################### +sub DbRep_Undef($$) { + my ($hash, $arg) = @_; + + RemoveInternalTimer($hash); + + my $dbh = $hash->{DBH}; + $dbh->disconnect() if(defined($dbh)); + + BlockingKill($hash->{HELPER}{RUNNING_PID}) if (exists($hash->{HELPER}{RUNNING_PID})); + BlockingKill($hash->{HELPER}{RUNNING_BACKUP_CLIENT}) if (exists($hash->{HELPER}{RUNNING_BACKUP_CLIENT})); + 
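+  # the remaining BlockingCall child processes of this device are terminated
+  # as well when the device is removed: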
BlockingKill($hash->{HELPER}{RUNNING_RESTORE}) if (exists($hash->{HELPER}{RUNNING_RESTORE})); + BlockingKill($hash->{HELPER}{RUNNING_BCKPREST_SERVER}) if (exists($hash->{HELPER}{RUNNING_BCKPREST_SERVER})); + BlockingKill($hash->{HELPER}{RUNNING_OPTIMIZE}) if (exists($hash->{HELPER}{RUNNING_OPTIMIZE})); + BlockingKill($hash->{HELPER}{RUNNING_REPAIR}) if (exists($hash->{HELPER}{RUNNING_REPAIR})); + + DbRep_delread($hash,1); + +return undef; +} + +################################################################################### +# DbRep_Shutdown +################################################################################### +sub DbRep_Shutdown($) { + my ($hash) = @_; + + my $dbh = $hash->{DBH}; + $dbh->disconnect() if(defined($dbh)); + DbRep_delread($hash,1); + RemoveInternalTimer($hash); + +return undef; +} + +################################################################################### +# First Init DB Connect +# Verbindung zur DB aufbauen und den Timestamp des ältesten +# Datensatzes ermitteln +################################################################################### +sub DbRep_firstconnect($) { + my ($hash) = @_; + my $name = $hash->{NAME}; + my $to = "120"; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + + RemoveInternalTimer($hash, "DbRep_firstconnect"); + return if(IsDisabled($name)); + if ($init_done == 1) { + Log3 ($name, 3, "DbRep $name - Connectiontest to database $dbconn with user $dbuser") if($hash->{LASTCMD} ne "minTimestamp"); + $hash->{HELPER}{RUNNING_PID} = BlockingCall("DbRep_getMinTs", "$name", "DbRep_getMinTsDone", $to, "DbRep_getMinTsAborted", $hash); + $hash->{HELPER}{RUNNING_PID}{loglevel} = 5 if($hash->{HELPER}{RUNNING_PID}); # Forum #77057 + } else { + InternalTimer(time+1, "DbRep_firstconnect", $hash, 0); + } + +return; +} + +#################################################################################################### +# den ältesten Datensatz (Timestamp) in der DB bestimmen +#################################################################################################### +sub DbRep_getMinTs($) { + my ($name) = @_; + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $mintsdef = "1970-01-01 01:00:00"; + my ($dbh,$sql,$err,$mints); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval { $dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 }); }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err"; + } + + # SQL-Startzeit + my $st = [gettimeofday]; + + eval { $mints = $dbh->selectrow_array("SELECT min(TIMESTAMP) FROM history;"); }; + # eval { $mints = $dbh->selectrow_array("select TIMESTAMP from history limit 1;"); }; + # eval { $mints = $dbh->selectrow_array("select TIMESTAMP from history order by TIMESTAMP limit 1;"); }; + + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + $mints = $mints?encode_base64($mints,""):encode_base64($mintsdef,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$mints|$rt|0"; +} + +#################################################################################################### +# Auswertungsroutine den ältesten Datensatz (Timestamp) in der 
DB bestimmen +#################################################################################################### +sub DbRep_getMinTsDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $mints = decode_base64($a[1]); + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + my $dblogdevice = $hash->{HELPER}{DBLOGDEVICE}; + $hash->{dbloghash} = $defs{$dblogdevice}; + my $dbconn = $hash->{dbloghash}{dbconn}; + + if ($err) { + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue ($hash, "errortext", $err); + ReadingsBulkUpdateValue ($hash, "state", "disconnected"); + readingsEndUpdate($hash, 1); + delete($hash->{HELPER}{RUNNING_PID}); + Log3 ($name, 2, "DbRep $name - DB connect failed. Make sure credentials of database $hash->{DATABASE} are valid and database is reachable."); + return; + } + + my $state = ($hash->{LASTCMD} eq "minTimestamp")?"done":"connected"; + $state = "invalid timestamp \"$mints\" found in database - please delete it" if($mints =~ /^0000-00-00.*$/); + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue ($hash, "timestamp_oldest_dataset", $mints) if($hash->{LASTCMD} eq "minTimestamp"); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,$state); + readingsEndUpdate($hash, 1); + + Log3 ($name, 4, "DbRep $name - Connectiontest to db $dbconn successful") if($hash->{LASTCMD} ne "minTimestamp"); + + $hash->{HELPER}{MINTS} = $mints; + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# Abbruchroutine den ältesten Datensatz (Timestamp) in der DB bestimmen +#################################################################################################### +sub DbRep_getMinTsAborted(@) { + my ($hash,$cause) = @_; + my $name = $hash->{NAME}; + + $cause = $cause?$cause:"Timeout: process terminated"; + Log3 ($name, 1, "DbRep $name -> BlockingCall $hash->{HELPER}{RUNNING_PID}{fn} pid:$hash->{HELPER}{RUNNING_PID}{pid} $cause"); + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue ($hash, "errortext", $cause); + ReadingsBulkUpdateValue ($hash, "state", "disconnected"); + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); +return; +} + +################################################################################################################ +# Hauptroutine +################################################################################################################ +sub DbRep_Main($$;$) { + my ($hash,$opt,$prop) = @_; + my $name = $hash->{NAME}; + my $to = AttrVal($name, "timeout", "86400"); + my $reading = AttrVal($name, "reading", "%"); + my $device = AttrVal($name, "device", "%"); + my $dbloghash = $hash->{dbloghash}; + my $dbmodel = $dbloghash->{MODEL}; + + # Entkommentieren für Testroutine im Vordergrund + # testexit($hash); + + return if( ($hash->{HELPER}{RUNNING_BACKUP_CLIENT} || + $hash->{HELPER}{RUNNING_BCKPREST_SERVER} || + $hash->{HELPER}{RUNNING_RESTORE} || + $hash->{HELPER}{RUNNING_REPAIR} || + $hash->{HELPER}{RUNNING_REDUCELOG} || + $hash->{HELPER}{RUNNING_OPTIMIZE}) && + $opt !~ /dumpMySQL|restoreMySQL|dumpSQLite|restoreSQLite|optimizeTables|vacuum|repairSQLite/ ); + + # Readings löschen die nicht in der Ausnahmeliste (Attr readingPreventFromDel) stehen + DbRep_delread($hash); + + if ($opt =~ /dumpMySQL|dumpSQLite/) { + BlockingKill($hash->{HELPER}{RUNNING_BACKUP_CLIENT}) if 
(exists($hash->{HELPER}{RUNNING_BACKUP_CLIENT})); + BlockingKill($hash->{HELPER}{RUNNING_BCKPREST_SERVER}) if (exists($hash->{HELPER}{RUNNING_BCKPREST_SERVER})); + BlockingKill($hash->{HELPER}{RUNNING_OPTIMIZE}) if (exists($hash->{HELPER}{RUNNING_OPTIMIZE})); + + if($dbmodel =~ /MYSQL/) { + if ($prop eq "serverSide") { + $hash->{HELPER}{RUNNING_BCKPREST_SERVER} = BlockingCall("mysql_DoDumpServerSide", "$name", "DbRep_DumpDone", $to, "DbRep_DumpAborted", $hash); + ReadingsSingleUpdateValue ($hash, "state", "serverSide Dump is running - be patient and see Logfile !", 1); + } else { + $hash->{HELPER}{RUNNING_BACKUP_CLIENT} = BlockingCall("mysql_DoDumpClientSide", "$name", "DbRep_DumpDone", $to, "DbRep_DumpAborted", $hash); + ReadingsSingleUpdateValue ($hash, "state", "clientSide Dump is running - be patient and see Logfile !", 1); + } + } + if($dbmodel =~ /SQLITE/) { + $hash->{HELPER}{RUNNING_BACKUP_CLIENT} = BlockingCall("DbRep_sqliteDoDump", "$name", "DbRep_DumpDone", $to, "DbRep_DumpAborted", $hash); + ReadingsSingleUpdateValue ($hash, "state", "SQLite Dump is running - be patient and see Logfile !", 1); + } + return; + } + + if ($opt =~ /restoreMySQL/) { + BlockingKill($hash->{HELPER}{RUNNING_RESTORE}) if (exists($hash->{HELPER}{RUNNING_RESTORE})); + BlockingKill($hash->{HELPER}{RUNNING_OPTIMIZE}) if (exists($hash->{HELPER}{RUNNING_OPTIMIZE})); + + if($prop =~ /csv/) { + $hash->{HELPER}{RUNNING_RESTORE} = BlockingCall("mysql_RestoreServerSide", "$name|$prop", "DbRep_restoreDone", $to, "DbRep_restoreAborted", $hash); + } elsif ($prop =~ /sql/) { + $hash->{HELPER}{RUNNING_RESTORE} = BlockingCall("mysql_RestoreClientSide", "$name|$prop", "DbRep_restoreDone", $to, "DbRep_restoreAborted", $hash); + } else { + ReadingsSingleUpdateValue ($hash, "state", "restore database error - unknown fileextension \"$prop\"", 1); + } + + ReadingsSingleUpdateValue ($hash, "state", "restore database is running - be patient and see Logfile !", 1); + return; + } + + if ($opt =~ /restoreSQLite/) { + BlockingKill($hash->{HELPER}{RUNNING_RESTORE}) if (exists($hash->{HELPER}{RUNNING_RESTORE})); + BlockingKill($hash->{HELPER}{RUNNING_OPTIMIZE}) if (exists($hash->{HELPER}{RUNNING_OPTIMIZE})); + $hash->{HELPER}{RUNNING_RESTORE} = BlockingCall("DbRep_sqliteRestore", "$name|$prop", "DbRep_restoreDone", $to, "DbRep_restoreAborted", $hash); + ReadingsSingleUpdateValue ($hash, "state", "restore database is running - be patient and see Logfile !", 1); + return; + } + + if ($opt =~ /optimizeTables|vacuum/) { + BlockingKill($hash->{HELPER}{RUNNING_OPTIMIZE}) if (exists($hash->{HELPER}{RUNNING_OPTIMIZE})); + BlockingKill($hash->{HELPER}{RUNNING_RESTORE}) if (exists($hash->{HELPER}{RUNNING_RESTORE})); + $hash->{HELPER}{RUNNING_OPTIMIZE} = BlockingCall("DbRep_optimizeTables", "$name", "DbRep_OptimizeDone", $to, "DbRep_OptimizeAborted", $hash); + ReadingsSingleUpdateValue ($hash, "state", "optimize tables is running - be patient and see Logfile !", 1); + return; + } + + if ($opt =~ /repairSQLite/) { + BlockingKill($hash->{HELPER}{RUNNING_BACKUP_CLIENT}) if (exists($hash->{HELPER}{RUNNING_BACKUP_CLIENT})); + BlockingKill($hash->{HELPER}{RUNNING_OPTIMIZE}) if (exists($hash->{HELPER}{RUNNING_OPTIMIZE})); + BlockingKill($hash->{HELPER}{RUNNING_REPAIR}) if (exists($hash->{HELPER}{RUNNING_REPAIR})); + $hash->{HELPER}{RUNNING_REPAIR} = BlockingCall("DbRep_sqliteRepair", "$name", "DbRep_RepairDone", $to, "DbRep_RepairAborted", $hash); + ReadingsSingleUpdateValue ($hash, "state", "repair database is running - be patient and see Logfile !", 
1); + return; + } + + if (exists($hash->{HELPER}{RUNNING_PID}) && $hash->{ROLE} ne "Agent") { + Log3 ($name, 3, "DbRep $name - WARNING - old process $hash->{HELPER}{RUNNING_PID}{pid} will be killed now to start a new BlockingCall"); + BlockingKill($hash->{HELPER}{RUNNING_PID}); + } + + ReadingsSingleUpdateValue ($hash, "state", "running", 1); + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # Ausgaben und Zeitmanipulationen + Log3 ($name, 4, "DbRep $name - -------- New selection --------- "); + Log3 ($name, 4, "DbRep $name - Command: $opt $prop"); + + # zentrales Timestamp-Array und Zeitgrenzen bereitstellen + my ($epoch_seconds_begin,$epoch_seconds_end,$runtime_string_first,$runtime_string_next); + my $ts = "no_aggregation"; # Dummy für eine Select-Schleife wenn != $IsTimeSet || $IsAggrSet + my ($IsTimeSet,$IsAggrSet,$aggregation) = DbRep_checktimeaggr($hash); + if($IsTimeSet || $IsAggrSet) { + ($epoch_seconds_begin,$epoch_seconds_end,$runtime_string_first,$runtime_string_next,$ts) = DbRep_createTimeArray($hash,$aggregation,$opt); + } else { + Log3 ($name, 4, "DbRep $name - Timestamp begin human readable: not set") if($opt !~ /tableCurrentPurge/); + Log3 ($name, 4, "DbRep $name - Timestamp end human readable: not set") if($opt !~ /tableCurrentPurge/); + } + + Log3 ($name, 4, "DbRep $name - Aggregation: $aggregation") if($opt !~ /tableCurrentPurge|tableCurrentFillup|fetchrows|insert|reduceLog/); + + ##### Funktionsaufrufe ##### + if ($opt eq "sumValue") { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("sumval_DoParse", "$name§$device§$reading§$prop§$ts", "sumval_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt =~ m/countEntries/) { + my $table = $prop; + $hash->{HELPER}{RUNNING_PID} = BlockingCall("count_DoParse", "$name§$table§$device§$reading§$ts", "count_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "averageValue") { + Log3 ($name, 4, "DbRep $name - averageValue calculation sceme: ".AttrVal($name,"averageCalcForm","avgArithmeticMean")); + $hash->{HELPER}{RUNNING_PID} = BlockingCall("averval_DoParse", "$name§$device§$reading§$prop§$ts", "averval_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "fetchrows") { + my $table = $prop; + $hash->{HELPER}{RUNNING_PID} = BlockingCall("fetchrows_DoParse", "$name|$table|$device|$reading|$runtime_string_first|$runtime_string_next", "fetchrows_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt =~ /delSeqDoublets/) { + my $cmd = $prop?$prop:"adviceRemain"; + $hash->{HELPER}{RUNNING_PID} = BlockingCall("delseqdoubl_DoParse", "$name§$cmd§$device§$reading§$ts", "delseqdoubl_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "exportToFile") { + my $file = $prop; + DbRep_beforeproc($hash, "export"); + $hash->{HELPER}{RUNNING_PID} = BlockingCall("expfile_DoParse", "$name§$device§$reading§$runtime_string_first§$file§$ts", "expfile_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "importFromFile") { + my $file = $prop; + DbRep_beforeproc($hash, "import"); + $hash->{HELPER}{RUNNING_PID} = BlockingCall("impfile_Push", "$name|$runtime_string_first|$file", "impfile_PushDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "maxValue") { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("maxval_DoParse", "$name§$device§$reading§$prop§$ts", "maxval_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "minValue") { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("minval_DoParse", 
"$name§$device§$reading§$prop§$ts", "minval_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "delEntries") { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("del_DoParse", "$name|history|$device|$reading|$runtime_string_first|$runtime_string_next", "del_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "tableCurrentPurge") { + undef $runtime_string_first; + undef $runtime_string_next; + $hash->{HELPER}{RUNNING_PID} = BlockingCall("del_DoParse", "$name|current|$device|$reading|$runtime_string_first|$runtime_string_next", "del_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "tableCurrentFillup") { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("currentfillup_Push", "$name|$device|$reading|$runtime_string_first|$runtime_string_next", "currentfillup_Done", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "diffValue") { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("diffval_DoParse", "$name§$device§$reading§$prop§$ts", "diffval_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt eq "insert") { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("insert_Push", "$name", "insert_Done", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt =~ /deviceRename|readingRename/) { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("change_Push", "$name|$device|$reading|$runtime_string_first|$runtime_string_next", "change_Done", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt =~ /changeValue/) { + $hash->{HELPER}{RUNNING_PID} = BlockingCall("changeval_Push", "$name§$device§$reading§$runtime_string_first§$runtime_string_next§$ts", "change_Done", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt =~ /sqlCmd|sqlSpecial/ ) { + # Execute a generic sql command or special sql + if ($opt =~ /sqlSpecial/) { + if($prop eq "50mostFreqLogsLast2days") { + $prop = "select Device, reading, count(0) AS `countA` from history where ( TIMESTAMP > (now() - interval 2 day)) group by DEVICE, READING order by countA desc, DEVICE limit 50;" if($dbmodel =~ /MYSQL/); + $prop = "select Device, reading, count(0) AS `countA` from history where ( TIMESTAMP > ('now' - '2 days')) group by DEVICE, READING order by countA desc, DEVICE limit 50;" if($dbmodel =~ /SQLITE/); + $prop = "select Device, reading, count(0) AS countA from history where ( TIMESTAMP > (NOW() - INTERVAL '2' DAY)) group by DEVICE, READING order by countA desc, DEVICE limit 50;" if($dbmodel =~ /POSTGRESQL/); + } elsif ($prop eq "allDevReadCount") { + $prop = "select device, reading, count(*) from history group by DEVICE, READING;"; + } elsif ($prop eq "allDevCount") { + $prop = "select device, count(*) from history group by DEVICE;"; + } + } + $hash->{HELPER}{RUNNING_PID} = BlockingCall("sqlCmd_DoParse", "$name|$opt|$runtime_string_first|$runtime_string_next|$prop", "sqlCmd_ParseDone", $to, "DbRep_ParseAborted", $hash); + + } elsif ($opt =~ /syncStandby/ ) { + # Befehl vor Procedure ausführen + DbRep_beforeproc($hash, "syncStandby"); + $hash->{HELPER}{RUNNING_PID} = BlockingCall("DbRep_syncStandby", "$name§$device§$reading§$runtime_string_first§$runtime_string_next§$ts§$prop", "DbRep_syncStandbyDone", $to, "DbRep_ParseAborted", $hash); + } + + if ($opt =~ /reduceLog/) { + $hash->{HELPER}{RUNNING_REDUCELOG} = BlockingCall("DbRep_reduceLog", "$name|$runtime_string_first|$runtime_string_next", "DbRep_reduceLogDone", $to, "DbRep_reduceLogAborted", $hash); + ReadingsSingleUpdateValue ($hash, "state", "reduceLog database is running - be patient and see Logfile !", 1); + 
$hash->{HELPER}{RUNNING_REDUCELOG}{loglevel} = 5 if($hash->{HELPER}{RUNNING_REDUCELOG}); # Forum #77057 + return; + } + +$hash->{HELPER}{RUNNING_PID}{loglevel} = 5 if($hash->{HELPER}{RUNNING_PID}); # Forum #77057 +return; +} + +################################################################################################################ +# Create zentrales Timsstamp-Array +################################################################################################################ +sub DbRep_createTimeArray($$$) { + my ($hash,$aggregation,$opt) = @_; + my $name = $hash->{NAME}; + + # year als Jahre seit 1900 + # $mon als 0..11 + # $time = timelocal( $sec, $min, $hour, $mday, $mon, $year ); + my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time); # Istzeit Ableitung + my ($tsbegin,$tsend,$dim,$tsub,$tadd); + my ($rsec,$rmin,$rhour,$rmday,$rmon,$ryear); + + + # absolute Auswertungszeiträume statische und dynamische (Beginn / Ende) berechnen + if($hash->{HELPER}{MINTS} && $hash->{HELPER}{MINTS} =~ m/0000-00-00/) { + Log3 ($name, 1, "DbRep $name - ERROR - wrong timestamp \"$hash->{HELPER}{MINTS}\" found in database. Please delete it !"); + delete $hash->{HELPER}{MINTS}; + } + + my $mints = $hash->{HELPER}{MINTS}?$hash->{HELPER}{MINTS}:"1970-01-01 01:00:00"; # Timestamp des 1. Datensatzes verwenden falls ermittelt + $tsbegin = AttrVal($name, "timestamp_begin", $mints); + $tsbegin = DbRep_formatpicker($tsbegin); + $tsend = AttrVal($name, "timestamp_end", strftime "%Y-%m-%d %H:%M:%S", localtime(time)); + $tsend = DbRep_formatpicker($tsend); + + if ( my $tap = AttrVal($name, "timeYearPeriod", undef)) { + # a b c d + # 06-01 02-28 , wenn c < a && $mon < a -> Jahr(a)-1, sonst Jahr(c)+1 + my $ybp = $year+1900; + my $yep = $year+1900; + $tap =~ qr/^(\d{2})-(\d{2}) (\d{2})-(\d{2})$/p; + my $mbp = $1; + my $dbp = $2; + my $mep = $3; + my $dep = $4; + my $c = ($mon+1).$mday; + my $e = $mep.$dep; + if ($mep <= $mbp && $c <= $e) { + $ybp--; + } else { + $yep++; + } + $tsbegin = "$ybp-$mbp-$dbp 00:00:00"; + $tsend = "$yep-$mep-$dep 23:59:59"; + } + + if (AttrVal($name,"timestamp_begin","") eq "current_year_begin" || + AttrVal($name,"timestamp_end","") eq "current_year_begin") { + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,0,$year)) if(AttrVal($name,"timestamp_begin","") eq "current_year_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,0,$year)) if(AttrVal($name,"timestamp_end","") eq "current_year_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_year_end" || + AttrVal($name, "timestamp_end", "") eq "current_year_end") { + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,31,11,$year)) if(AttrVal($name,"timestamp_begin","") eq "current_year_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,31,11,$year)) if(AttrVal($name,"timestamp_end","") eq "current_year_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_year_begin" || + AttrVal($name, "timestamp_end", "") eq "previous_year_begin") { + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,0,$year-1)) if(AttrVal($name, "timestamp_begin", "") eq "previous_year_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,0,$year-1)) if(AttrVal($name, "timestamp_end", "") eq "previous_year_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_year_end" || + AttrVal($name, "timestamp_end", "") eq "previous_year_end") { + $tsbegin = strftime "%Y-%m-%d 
%T",localtime(timelocal(59,59,23,31,11,$year-1)) if(AttrVal($name, "timestamp_begin", "") eq "previous_year_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,31,11,$year-1)) if(AttrVal($name, "timestamp_end", "") eq "previous_year_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_month_begin" || + AttrVal($name, "timestamp_end", "") eq "current_month_begin") { + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,$mon,$year)) if(AttrVal($name, "timestamp_begin", "") eq "current_month_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,$mon,$year)) if(AttrVal($name, "timestamp_end", "") eq "current_month_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_month_end" || + AttrVal($name, "timestamp_end", "") eq "current_month_end") { + $dim = $mon-1?30+(($mon+1)*3%7<4):28+!($year%4||$year%400*!($year%100)); + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$dim,$mon,$year)) if(AttrVal($name, "timestamp_begin", "") eq "current_month_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$dim,$mon,$year)) if(AttrVal($name, "timestamp_end", "") eq "current_month_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_month_begin" || + AttrVal($name, "timestamp_end", "") eq "previous_month_begin") { + $ryear = ($mon-1<0)?$year-1:$year; + $rmon = ($mon-1<0)?11:$mon-1; + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_month_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,1,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_month_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_month_end" || + AttrVal($name, "timestamp_end", "") eq "previous_month_end") { + $ryear = ($mon-1<0)?$year-1:$year; + $rmon = ($mon-1<0)?11:$mon-1; + $dim = $rmon-1?30+(($rmon+1)*3%7<4):28+!($ryear%4||$ryear%400*!($ryear%100)); + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$dim,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_month_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$dim,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_month_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_week_begin" || + AttrVal($name, "timestamp_end", "") eq "current_week_begin") { + $tsub = 0 if($wday == 1); # wenn Start am "Mo" keine Korrektur + $tsub = 86400 if($wday == 2); # wenn Start am "Di" dann Korrektur -1 Tage + $tsub = 172800 if($wday == 3); # wenn Start am "Mi" dann Korrektur -2 Tage + $tsub = 259200 if($wday == 4); # wenn Start am "Do" dann Korrektur -3 Tage + $tsub = 345600 if($wday == 5); # wenn Start am "Fr" dann Korrektur -4 Tage + $tsub = 432000 if($wday == 6); # wenn Start am "Sa" dann Korrektur -5 Tage + $tsub = 518400 if($wday == 0); # wenn Start am "So" dann Korrektur -6 Tage + ($rsec,$rmin,$rhour,$rmday,$rmon,$ryear) = localtime(time-$tsub); + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "current_week_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "current_week_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_week_end" || + AttrVal($name, "timestamp_end", "") eq "current_week_end") { + $tadd = 518400 if($wday == 1); # wenn Start am "Mo" dann Korrektur +6 Tage + $tadd = 
432000 if($wday == 2); # wenn Start am "Di" dann Korrektur +5 Tage + $tadd = 345600 if($wday == 3); # wenn Start am "Mi" dann Korrektur +4 Tage + $tadd = 259200 if($wday == 4); # wenn Start am "Do" dann Korrektur +3 Tage + $tadd = 172800 if($wday == 5); # wenn Start am "Fr" dann Korrektur +2 Tage + $tadd = 86400 if($wday == 6); # wenn Start am "Sa" dann Korrektur +1 Tage + $tadd = 0 if($wday == 0); # wenn Start am "So" keine Korrektur + ($rsec,$rmin,$rhour,$rmday,$rmon,$ryear) = localtime(time+$tadd); + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "current_week_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "current_week_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_week_begin" || + AttrVal($name, "timestamp_end", "") eq "previous_week_begin") { + $tsub = 604800 if($wday == 1); # wenn Start am "Mo" dann Korrektur -7 Tage + $tsub = 691200 if($wday == 2); # wenn Start am "Di" dann Korrektur -8 Tage + $tsub = 777600 if($wday == 3); # wenn Start am "Mi" dann Korrektur -9 Tage + $tsub = 864000 if($wday == 4); # wenn Start am "Do" dann Korrektur -10 Tage + $tsub = 950400 if($wday == 5); # wenn Start am "Fr" dann Korrektur -11 Tage + $tsub = 1036800 if($wday == 6); # wenn Start am "Sa" dann Korrektur -12 Tage + $tsub = 1123200 if($wday == 0); # wenn Start am "So" dann Korrektur -13 Tage + ($rsec,$rmin,$rhour,$rmday,$rmon,$ryear) = localtime(time-$tsub); + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_week_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_week_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_week_end" || + AttrVal($name, "timestamp_end", "") eq "previous_week_end") { + $tsub = 86400 if($wday == 1); # wenn Start am "Mo" dann Korrektur -1 Tage + $tsub = 172800 if($wday == 2); # wenn Start am "Di" dann Korrektur -2 Tage + $tsub = 259200 if($wday == 3); # wenn Start am "Mi" dann Korrektur -3 Tage + $tsub = 345600 if($wday == 4); # wenn Start am "Do" dann Korrektur -4 Tage + $tsub = 432000 if($wday == 5); # wenn Start am "Fr" dann Korrektur -5 Tage + $tsub = 518400 if($wday == 6); # wenn Start am "Sa" dann Korrektur -6 Tage + $tsub = 604800 if($wday == 0); # wenn Start am "So" dann Korrektur -7 Tage + ($rsec,$rmin,$rhour,$rmday,$rmon,$ryear) = localtime(time-$tsub); + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_week_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_week_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_day_begin" || + AttrVal($name, "timestamp_end", "") eq "current_day_begin") { + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$mday,$mon,$year)) if(AttrVal($name, "timestamp_begin", "") eq "current_day_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$mday,$mon,$year)) if(AttrVal($name, "timestamp_end", "") eq "current_day_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_day_end" || + AttrVal($name, "timestamp_end", "") eq "current_day_end") { + $tsbegin = strftime "%Y-%m-%d 
%T",localtime(timelocal(59,59,23,$mday,$mon,$year)) if(AttrVal($name, "timestamp_begin", "") eq "current_day_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$mday,$mon,$year)) if(AttrVal($name, "timestamp_end", "") eq "current_day_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_day_begin" || + AttrVal($name, "timestamp_end", "") eq "previous_day_begin") { + $rmday = $mday-1; + $rmon = $mon; + $ryear = $year; + if($rmday<1) { + $rmon--; + if ($rmon<0) { + $rmon=11; + $ryear--; + } + $rmday = $rmon-1?30+(($rmon+1)*3%7<4):28+!($ryear%4||$ryear%400*!($ryear%100)); # Achtung: Monat als 1...12 (statt 0...11) + } + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_day_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,0,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_day_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_day_end" || + AttrVal($name, "timestamp_end", "") eq "previous_day_end") { + $rmday = $mday-1; + $rmon = $mon; + $ryear = $year; + if($rmday<1) { + $rmon--; + if ($rmon<0) { + $rmon=11; + $ryear--; + } + $rmday = $rmon-1?30+(($rmon+1)*3%7<4):28+!($ryear%4||$ryear%400*!($ryear%100)); # Achtung: Monat als 1...12 (statt 0...11) + } + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_day_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,23,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_day_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_hour_begin" || + AttrVal($name, "timestamp_end", "") eq "current_hour_begin") { + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,$hour,$mday,$mon,$year)) if(AttrVal($name, "timestamp_begin", "") eq "current_hour_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,$hour,$mday,$mon,$year)) if(AttrVal($name, "timestamp_end", "") eq "current_hour_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "current_hour_end" || + AttrVal($name, "timestamp_end", "") eq "current_hour_end") { + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,$hour,$mday,$mon,$year)) if(AttrVal($name, "timestamp_begin", "") eq "current_hour_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,$hour,$mday,$mon,$year)) if(AttrVal($name, "timestamp_end", "") eq "current_hour_end"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_hour_begin" || + AttrVal($name, "timestamp_end", "") eq "previous_hour_begin") { + $rhour = $hour-1; + $rmday = $mday; + $rmon = $mon; + $ryear = $year; + if($rhour<0) { + $rhour = 23; + $rmday = $mday-1; + if($rmday<1) { + $rmon--; + if ($rmon<0) { + $rmon=11; + $ryear--; + } + $rmday = $rmon-1?30+(($rmon+1)*3%7<4):28+!($ryear%4||$ryear%400*!($ryear%100)); # Achtung: Monat als 1...12 (statt 0...11) + } + } + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,$rhour,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_hour_begin"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(0,0,$rhour,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_hour_begin"); + } + + if (AttrVal($name, "timestamp_begin", "") eq "previous_hour_end" || + AttrVal($name, "timestamp_end", "") eq "previous_hour_end") { + $rhour = $hour-1; + $rmday = $mday; + $rmon = $mon; + $ryear = $year; + if($rhour<0) { + $rhour = 
23; + $rmday = $mday-1; + if($rmday<1) { + $rmon--; + if ($rmon<0) { + $rmon=11; + $ryear--; + } + $rmday = $rmon-1?30+(($rmon+1)*3%7<4):28+!($ryear%4||$ryear%400*!($ryear%100)); # Achtung: Monat als 1...12 (statt 0...11) + } + } + $tsbegin = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,$rhour,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_begin", "") eq "previous_hour_end"); + $tsend = strftime "%Y-%m-%d %T",localtime(timelocal(59,59,$rhour,$rmday,$rmon,$ryear)) if(AttrVal($name, "timestamp_end", "") eq "previous_hour_end"); + } + + # extrahieren der Einzelwerte von Datum/Zeit Beginn + my ($yyyy1, $mm1, $dd1, $hh1, $min1, $sec1) = ($tsbegin =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/); + # extrahieren der Einzelwerte von Datum/Zeit Ende + my ($yyyy2, $mm2, $dd2, $hh2, $min2, $sec2) = ($tsend =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/); + + + # relative Auswertungszeit Beginn berücksichtigen # Umwandeln in Epochesekunden Beginn + my $epoch_seconds_begin = timelocal($sec1, $min1, $hh1, $dd1, $mm1-1, $yyyy1-1900) if($tsbegin); + my ($timeolderthan,$timedifftonow) = DbRep_normRelTime($hash); + + if($timedifftonow) { + $epoch_seconds_begin = time() - $timedifftonow; + Log3 ($name, 4, "DbRep $name - Time difference to current time for calculating Timestamp begin: $timedifftonow sec"); + } elsif ($timeolderthan) { + my $mints = $hash->{HELPER}{MINTS}?$hash->{HELPER}{MINTS}:"1970-01-01 01:00:00"; # Timestamp des 1. Datensatzes verwenden falls ermittelt + $mints =~ /^(\d+)-(\d+)-(\d+)\s(\d+):(\d+):(\d+)$/; + $epoch_seconds_begin = timelocal($6, $5, $4, $3, $2-1, $1-1900); + } + + my $tsbegin_string = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_begin); + Log3 ($name, 5, "DbRep $name - Timestamp begin epocheseconds: $epoch_seconds_begin") if($opt !~ /tableCurrentPurge/); + Log3 ($name, 4, "DbRep $name - Timestamp begin human readable: $tsbegin_string") if($opt !~ /tableCurrentPurge/); + + + # relative Auswertungszeit Ende berücksichtigen # Umwandeln in Epochesekunden Endezeit + my $epoch_seconds_end = timelocal($sec2, $min2, $hh2, $dd2, $mm2-1, $yyyy2-1900); + + $epoch_seconds_end = $timeolderthan ? (time() - $timeolderthan) : $epoch_seconds_end; + + #$epoch_seconds_end = AttrVal($name, "timeOlderThan", undef) ? 
+ # (time() - AttrVal($name, "timeOlderThan", undef)) : $epoch_seconds_end; + Log3 ($name, 4, "DbRep $name - Time difference to current time for calculating Timestamp end: $timeolderthan sec") if(AttrVal($name, "timeOlderThan", undef)); + + my $tsend_string = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + + Log3 ($name, 5, "DbRep $name - Timestamp end epocheseconds: $epoch_seconds_end") if($opt !~ /tableCurrentPurge/); + Log3 ($name, 4, "DbRep $name - Timestamp end human readable: $tsend_string") if($opt !~ /tableCurrentPurge/); + + + # Erstellung Wertehash für Aggregationen + my $runtime = $epoch_seconds_begin; # Schleifenlaufzeit auf Beginn der Zeitselektion setzen + my $runtime_string; # Datum/Zeit im SQL-Format für Readingname Teilstring + my $runtime_string_first; # Datum/Zeit Auswertungsbeginn im SQL-Format für SQL-Statement + my $runtime_string_next; # Datum/Zeit + Periode (Granularität) für Auswertungsende im SQL-Format + my $reading_runtime_string; # zusammengesetzter Readingname+Aggregation für Update + my $tsstr = strftime "%H:%M:%S", localtime($runtime); # für Berechnung Tagesverschieber / Stundenverschieber + my $testr = strftime "%H:%M:%S", localtime($epoch_seconds_end); # für Berechnung Tagesverschieber / Stundenverschieber + my $dsstr = strftime "%Y-%m-%d", localtime($runtime); # für Berechnung Tagesverschieber / Stundenverschieber + my $destr = strftime "%Y-%m-%d", localtime($epoch_seconds_end); # für Berechnung Tagesverschieber / Stundenverschieber + my $msstr = strftime "%m", localtime($runtime); # Startmonat für Berechnung Monatsverschieber + my $mestr = strftime "%m", localtime($epoch_seconds_end); # Endemonat für Berechnung Monatsverschieber + my $ysstr = strftime "%Y", localtime($runtime); # Startjahr für Berechnung Monatsverschieber + my $yestr = strftime "%Y", localtime($epoch_seconds_end); # Endejahr für Berechnung Monatsverschieber + + my $wd = strftime "%a", localtime($runtime); # Wochentag des aktuellen Startdatum/Zeit + my $wdadd = 604800 if($wd eq "Mo"); # wenn Start am "Mo" dann nächste Grenze +7 Tage + $wdadd = 518400 if($wd eq "Di"); # wenn Start am "Di" dann nächste Grenze +6 Tage + $wdadd = 432000 if($wd eq "Mi"); # wenn Start am "Mi" dann nächste Grenze +5 Tage + $wdadd = 345600 if($wd eq "Do"); # wenn Start am "Do" dann nächste Grenze +4 Tage + $wdadd = 259200 if($wd eq "Fr"); # wenn Start am "Fr" dann nächste Grenze +3 Tage + $wdadd = 172800 if($wd eq "Sa"); # wenn Start am "Sa" dann nächste Grenze +2 Tage + $wdadd = 86400 if($wd eq "So"); # wenn Start am "So" dann nächste Grenze +1 Tage + + Log3 ($name, 5, "DbRep $name - weekday of start for selection: $wd -> wdadd: $wdadd") if($wdadd); + + my $aggsec; + if ($aggregation eq "hour") { + $aggsec = 3600; + } elsif ($aggregation eq "day") { + $aggsec = 86400; + } elsif ($aggregation eq "week") { + $aggsec = 604800; + } elsif ($aggregation eq "month") { + $aggsec = 2678400; # Initialwert, wird in DbRep_collaggstr für jeden Monat berechnet + } elsif ($aggregation eq "no") { + $aggsec = 1; + } else { + return; + } + +my %cv = ( + tsstr => $tsstr, + testr => $testr, + dsstr => $dsstr, + destr => $destr, + msstr => $msstr, + mestr => $mestr, + ysstr => $ysstr, + yestr => $yestr, + aggsec => $aggsec, + aggregation => $aggregation, + epoch_seconds_end => $epoch_seconds_end, + wdadd => $wdadd +); +$hash->{HELPER}{CV} = \%cv; + + my $ts; # für Erstellung Timestamp-Array zur nonblocking SQL-Abarbeitung + my $i = 1; # Schleifenzähler -> nur Indikator für ersten Durchlauf -> anderer 
$runtime_string_first + my $ll; # loopindikator, wenn 1 = loopausstieg + + # Aufbau Timestampstring mit Zeitgrenzen entsprechend Aggregation + while (!$ll) { + # collect aggregation strings + ($runtime,$runtime_string,$runtime_string_first,$runtime_string_next,$ll) = DbRep_collaggstr($hash,$runtime,$i,$runtime_string_next); + $ts .= $runtime_string."#".$runtime_string_first."#".$runtime_string_next."|"; + $i++; + } + +return ($epoch_seconds_begin,$epoch_seconds_end,$runtime_string_first,$runtime_string_next,$ts); +} + +#################################################################################################### +# Zusammenstellung Aggregationszeiträume +#################################################################################################### +sub DbRep_collaggstr($$$$) { + my ($hash,$runtime,$i,$runtime_string_next) = @_; + my $name = $hash->{NAME}; + my $runtime_string; # Datum/Zeit im SQL-Format für Readingname Teilstring + my $runtime_string_first; # Datum/Zeit Auswertungsbeginn im SQL-Format für SQL-Statement + my $ll; # loopindikator, wenn 1 = loopausstieg + my $runtime_orig; # orig. runtime als Grundlage für Addition mit $aggsec + my $tsstr = $hash->{HELPER}{CV}{tsstr}; # für Berechnung Tagesverschieber / Stundenverschieber + my $testr = $hash->{HELPER}{CV}{testr}; # für Berechnung Tagesverschieber / Stundenverschieber + my $dsstr = $hash->{HELPER}{CV}{dsstr}; # für Berechnung Tagesverschieber / Stundenverschieber + my $destr = $hash->{HELPER}{CV}{destr}; # für Berechnung Tagesverschieber / Stundenverschieber + my $msstr = $hash->{HELPER}{CV}{msstr}; # Startmonat für Berechnung Monatsverschieber + my $mestr = $hash->{HELPER}{CV}{mestr}; # Endemonat für Berechnung Monatsverschieber + my $ysstr = $hash->{HELPER}{CV}{ysstr}; # Startjahr für Berechnung Monatsverschieber + my $yestr = $hash->{HELPER}{CV}{yestr}; # Endejahr für Berechnung Monatsverschieber + my $aggregation = $hash->{HELPER}{CV}{aggregation}; # Aggregation + my $aggsec = $hash->{HELPER}{CV}{aggsec}; # laufende Aggregationssekunden + my $epoch_seconds_end = $hash->{HELPER}{CV}{epoch_seconds_end}; + my $wdadd = $hash->{HELPER}{CV}{wdadd}; # Ergänzungstage. Starttag + Ergänzungstage = der folgende Montag (für week-Aggregation) + + # only for this block because of warnings if some values not set + no warnings 'uninitialized'; + + # keine Aggregation (all between timestamps) + if ($aggregation eq "no") { + $runtime_string = "no_aggregation"; # für Readingname + $runtime_string_first = strftime "%Y-%m-%d %H:%M:%S", localtime($runtime); + $runtime_string_next = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + $ll = 1; + } + + # Monatsaggregation + if ($aggregation eq "month") { + $runtime_orig = $runtime; + + # Hilfsrechnungen + my $rm = strftime "%m", localtime($runtime); # Monat des aktuell laufenden Startdatums d. SQL-Select + my $ry = strftime "%Y", localtime($runtime); # Jahr des aktuell laufenden Startdatums d. 
SQL-Select + my $dim = $rm-2?30+($rm*3%7<4):28+!($ry%4||$ry%400*!($ry%100)); # Anzahl Tage des aktuell laufenden Monats + Log3 ($name, 5, "DbRep $name - act year: $ry, act month: $rm, days in month: $dim, endyear: $yestr, endmonth: $mestr"); + $aggsec = $dim * 86400; + + $runtime = $runtime+3600 if(DbRep_dsttest($hash,$runtime,$aggsec) && (strftime "%m", localtime($runtime)) > 6); # Korrektur Winterzeitumstellung (Uhr wurde 1 Stunde zurück gestellt) + + $runtime_string = strftime "%Y-%m", localtime($runtime); # für Readingname + + if ($i==1) { + # nur im ersten Durchlauf + $runtime_string_first = strftime "%Y-%m-%d %H:%M:%S", localtime($runtime_orig); + } + + if ($ysstr == $yestr && $msstr == $mestr || $ry == $yestr && $rm == $mestr) { + $runtime_string_first = strftime "%Y-%m-01", localtime($runtime) if($i>1); + $runtime_string_next = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + $ll=1; + + } else { + if(($runtime) > $epoch_seconds_end) { + #$runtime_string_first = strftime "%Y-%m-01", localtime($runtime) if($i>11); # ausgebaut 24.02.2018 + $runtime_string_first = strftime "%Y-%m-01", localtime($runtime); + $runtime_string_next = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + $ll=1; + } else { + $runtime_string_first = strftime "%Y-%m-01", localtime($runtime) if($i>1); + $runtime_string_next = strftime "%Y-%m-01", localtime($runtime+($dim*86400)); + + } + } + my ($yyyy1, $mm1, $dd1) = ($runtime_string_next =~ /(\d+)-(\d+)-(\d+)/); + $runtime = timelocal("00", "00", "00", "01", $mm1-1, $yyyy1-1900); + + # neue Beginnzeit in Epoche-Sekunden + $runtime = $runtime_orig+$aggsec; + } + + # Wochenaggregation + if ($aggregation eq "week") { + $runtime = $runtime+3600 if($i!=1 && DbRep_dsttest($hash,$runtime,$aggsec) && (strftime "%m", localtime($runtime)) > 6); # Korrektur Winterzeitumstellung (Uhr wurde 1 Stunde zurück gestellt) + $runtime_orig = $runtime; + + my $w = strftime "%V", localtime($runtime); # Wochennummer des aktuellen Startdatum/Zeit + $runtime_string = "week_".$w; # für Readingname + my $ms = strftime "%m", localtime($runtime); # Startmonat (01-12) + my $me = strftime "%m", localtime($epoch_seconds_end); # Endemonat (01-12) + + if ($i==1) { + # nur im ersten Schleifendurchlauf + $runtime_string_first = strftime "%Y-%m-%d %H:%M:%S", localtime($runtime); + + # Korrektur $runtime_orig für Berechnung neue Beginnzeit für nächsten Durchlauf + my ($yyyy1, $mm1, $dd1) = ($runtime_string_first =~ /(\d+)-(\d+)-(\d+)/); + $runtime = timelocal("00", "00", "00", $dd1, $mm1-1, $yyyy1-1900); + $runtime = $runtime+3600 if(DbRep_dsttest($hash,$runtime,$aggsec) && (strftime "%m", localtime($runtime)) > 6); # Korrektur Winterzeitumstellung (Uhr wurde 1 Stunde zurück gestellt) + $runtime = $runtime+$wdadd; + $runtime_orig = $runtime-$aggsec; + + # die Woche Beginn ist gleich der Woche vom Ende Auswertung + if((strftime "%V", localtime($epoch_seconds_end)) eq ($w) && ($ms+$me != 13)) { + $runtime_string_next = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + $ll=1; + } else { + $runtime_string_next = strftime "%Y-%m-%d", localtime($runtime); + } + } else { + # weitere Durchläufe + if(($runtime+$aggsec) > $epoch_seconds_end) { + $runtime_string_first = strftime "%Y-%m-%d", localtime($runtime_orig); + $runtime_string_next = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + $ll=1; + } else { + $runtime_string_first = strftime "%Y-%m-%d", localtime($runtime_orig) ; + $runtime_string_next = strftime "%Y-%m-%d", localtime($runtime+$aggsec); + } + } 
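+ # Illustrative example (assumed start date, not part of the selection logic): if the
+ # selection starts on a Wednesday ("Mi"), $wdadd = 432000 s (5 days), so the first
+ # weekly slice ends on the following Monday 00:00:00; each further slice then
+ # advances by $aggsec = 604800 s (one week).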
+ + # neue Beginnzeit in Epoche-Sekunden + $runtime = $runtime_orig+$aggsec; + } + + # Tagesaggregation + if ($aggregation eq "day") { + $runtime_string = strftime "%Y-%m-%d", localtime($runtime); # für Readingname + $runtime_string_first = strftime "%Y-%m-%d %H:%M:%S", localtime($runtime) if($i==1); + $runtime_string_first = strftime "%Y-%m-%d", localtime($runtime) if($i>1); + $runtime = $runtime+3600 if(DbRep_dsttest($hash,$runtime,$aggsec) && (strftime "%m", localtime($runtime)) > 6); # Korrektur Winterzeitumstellung (Uhr wurde 1 Stunde zurück gestellt) + + if((($tsstr gt $testr) ? $runtime : ($runtime+$aggsec)) > $epoch_seconds_end) { + $runtime_string_first = strftime "%Y-%m-%d", localtime($runtime); + $runtime_string_first = strftime "%Y-%m-%d %H:%M:%S", localtime($runtime) if( $dsstr eq $destr); + $runtime_string_next = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + $ll=1; + } else { + $runtime_string_next = strftime "%Y-%m-%d", localtime($runtime+$aggsec); + } + Log3 ($name, 5, "DbRep $name - runtime_string: $runtime_string, runtime_string_first: $runtime_string_first, runtime_string_next: $runtime_string_next"); + + # neue Beginnzeit in Epoche-Sekunden + $runtime = $runtime+$aggsec; + } + + # Stundenaggregation + if ($aggregation eq "hour") { + $runtime_string = strftime "%Y-%m-%d_%H", localtime($runtime); # für Readingname + $runtime_string_first = strftime "%Y-%m-%d %H:%M:%S", localtime($runtime) if($i==1); + $runtime = $runtime+3600 if(DbRep_dsttest($hash,$runtime,$aggsec) && (strftime "%m", localtime($runtime)) > 6); # Korrektur Winterzeitumstellung (Uhr wurde 1 Stunde zurück gestellt) + $runtime_string_first = strftime "%Y-%m-%d %H", localtime($runtime) if($i>1); + + my @a = split (":",$tsstr); + my $hs = $a[0]; + my $msstr = $a[1].":".$a[2]; + @a = split (":",$testr); + my $he = $a[0]; + my $mestr = $a[1].":".$a[2]; + + if((($msstr gt $mestr) ? $runtime : ($runtime+$aggsec)) > $epoch_seconds_end) { + $runtime_string_first = strftime "%Y-%m-%d %H", localtime($runtime); + $runtime_string_first = strftime "%Y-%m-%d %H:%M:%S", localtime($runtime) if( $dsstr eq $destr && $hs eq $he); + $runtime_string_next = strftime "%Y-%m-%d %H:%M:%S", localtime($epoch_seconds_end); + $ll=1; + } else { + $runtime_string_next = strftime "%Y-%m-%d %H", localtime($runtime+$aggsec); + } + + # neue Beginnzeit in Epoche-Sekunden + $runtime = $runtime+$aggsec; + } + +return ($runtime,$runtime_string,$runtime_string_first,$runtime_string_next,$ll); +} + +#################################################################################################### +# nichtblockierende DB-Abfrage averageValue +#################################################################################################### +sub averval_DoParse($) { + my ($string) = @_; + my ($name,$device,$reading,$prop,$ts) = split("\\§", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $acf = AttrVal($name, "averageCalcForm", "avgArithmeticMean"); # Festlegung Berechnungsschema f. 
Mittelwert + my $qlf = "avg"; + my ($dbh,$sql,$sth,$err,$selspec,$addon); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$device|$reading|''|$err|''"; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + if($acf eq "avgArithmeticMean") { + # arithmetischer Mittelwert + # vorbereiten der DB-Abfrage, DB-Modell-abhaengig + $addon = ''; + if ($dbloghash->{MODEL} eq "POSTGRESQL") { + $selspec = "AVG(VALUE::numeric)"; + } elsif ($dbloghash->{MODEL} eq "MYSQL") { + $selspec = "AVG(VALUE)"; + } elsif ($dbloghash->{MODEL} eq "SQLITE") { + $selspec = "AVG(VALUE)"; + } else { + $selspec = "AVG(VALUE)"; + } + $qlf = "avgam"; + } elsif ($acf eq "avgDailyMeanGWS") { + # Tagesmittelwert Temperaturen nach Schema des deutschen Wetterdienstes + # SELECT VALUE FROM history WHERE DEVICE="MyWetter" AND READING="temperature" AND TIMESTAMP >= "2018-01-28 $i:00:00" AND TIMESTAMP <= "2018-01-28 ($i+1):00:00" ORDER BY TIMESTAMP DESC LIMIT 1; + $addon = "ORDER BY TIMESTAMP DESC LIMIT 1"; + $selspec = "VALUE"; + $qlf = "avgdmgws"; + } elsif ($acf eq "avgTimeWeightMean") { + $addon = "ORDER BY TIMESTAMP ASC"; + $selspec = "TIMESTAMP,VALUE"; + $qlf = "avgtwm"; + } + + # SQL-Startzeit + my $st = [gettimeofday]; + + # DB-Abfrage zeilenweise für jeden Array-Eintrag + my $arrstr; + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + + if($acf eq "avgArithmeticMean") { + # arithmetischer Mittelwert (Standard) + # + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",$addon); + } else { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,undef,undef,$addon); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|''"; + } + + my @line = $sth->fetchrow_array(); + + Log3 ($name, 5, "DbRep $name - SQL result: $line[0]") if($line[0]); + + if(AttrVal($name, "aggregation", "") eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + $arrstr .= $runtime_string."#".$line[0]."#".$rsf[0]."_".$rsf[1]."|"; + } else { + my @rsf = split(" ",$runtime_string_first); + $arrstr .= $runtime_string."#".$line[0]."#".$rsf[0]."|"; + } + + } elsif ($acf eq "avgDailyMeanGWS") { + # Berechnung des Tagesmittelwertes (Temperatur) nach der Vorschrift des deutschen Wetterdienstes + # Berechnung der Tagesmittel aus 24 Stundenwerten, Bezugszeit für einen Tag i.d.R. 23:51 UTC des + # Vortages bis 23:50 UTC, d.h. 00:51 bis 23:50 MEZ + # Wenn mehr als 3 Stundenwerte fehlen -> Berechnung aus den 4 Hauptterminen (00, 06, 12, 18 UTC), + # d.h. 
01, 07, 13, 19 MEZ + # https://www.dwd.de/DE/leistungen/klimadatendeutschland/beschreibung_tagesmonatswerte.html + # + my $sum = 0; + my $anz = 0; # Anzahl der Messwerte am Tag + my($t01,$t07,$t13,$t19); # Temperaturen der Haupttermine + my ($bdate,undef) = split(" ",$runtime_string_first); + for my $i (0..23) { + my $bsel = $bdate." ".sprintf("%02d",$i).":00:00"; + my $esel = ($i<23)?$bdate." ".sprintf("%02d",$i).":59:59":$runtime_string_next; + + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,"'$bsel'","'$esel'",$addon); + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|''"; + } + my $val = $sth->fetchrow_array(); + Log3 ($name, 5, "DbRep $name - SQL result: $val") if($val); + $val = DbRep_numval ($val); # nichtnumerische Zeichen eliminieren + if(defined($val) && looks_like_number($val)) { + $sum += $val; + $t01 = $val if($val && $i == 00); # Wert f. Stunde 01 ist zw. letzter Wert vor 01 + $t07 = $val if($val && $i == 06); + $t13 = $val if($val && $i == 12); + $t19 = $val if($val && $i == 18); + $anz++; + } + } + if($anz >= 21) { + $sum = $sum/24; + } elsif ($anz >= 4 && $t01 && $t07 && $t13 && $t19) { + $sum = ($t01+$t07+$t13+$t19)/4; + } else { + $sum = "insufficient values"; + } + + if(AttrVal($name, "aggregation", "") eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + $arrstr .= $runtime_string."#".$sum."#".$rsf[0]."_".$rsf[1]."|"; + } else { + my @rsf = split(" ",$runtime_string_first); + $arrstr .= $runtime_string."#".$sum."#".$rsf[0]."|"; + } + + } elsif ($acf eq "avgTimeWeightMean") { + # zeitgewichteten Mittelwert berechnen + # http://massmatics.de/merkzettel/#!837:Gewichteter_Mittelwert + # + # $tsum = timestamp letzter Messpunkt - timestamp erster Messpunkt + # $t1 = timestamp wert1 + # $t2 = timestamp wert2 + # $dt = $t2 - $t1 + # $t1 = $t2 + # ..... + # (val1*$dt/$tsum) + (val2*$dt/$tsum) + .... 
+ (valn*$dt/$tsum) + # + + # gesamte Zeitspanne $tsum zwischen ersten und letzten Datensatz der Zeitscheibe ermitteln + my ($tsum,$tf,$tl,$tn,$to,$dt,$val,$val1); + my $sum = 0; + my $addonf = 'ORDER BY TIMESTAMP ASC LIMIT 1'; + my $addonl = 'ORDER BY TIMESTAMP DESC LIMIT 1'; + my $sqlf = DbRep_createSelectSql($hash,"history","TIMESTAMP",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",$addonf); + my $sqll = DbRep_createSelectSql($hash,"history","TIMESTAMP",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",$addonl); + + eval { $tf = ($dbh->selectrow_array($sqlf))[0]; + $tl = ($dbh->selectrow_array($sqll))[0]; + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|''"; + } + + if(!$tf || !$tl) { + # kein Start- und/oder Ende Timestamp in Zeitscheibe vorhanden -> keine Werteberechnung möglich + $sum = "insufficient values"; + } else { + my ($yyyyf, $mmf, $ddf, $hhf, $minf, $secf) = ($tf =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/); + my ($yyyyl, $mml, $ddl, $hhl, $minl, $secl) = ($tl =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/); + $tsum = (timelocal($secl, $minl, $hhl, $ddl, $mml-1, $yyyyl-1900))-(timelocal($secf, $minf, $hhf, $ddf, $mmf-1, $yyyyf-1900)); + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",$addon); + } else { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,undef,undef,$addon); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|''"; + } + + my @twm_array = map { $_->[0]."_ESC_".$_->[1] } @{$sth->fetchall_arrayref()}; + + foreach my $twmrow (@twm_array) { + ($tn,$val) = split("_ESC_",$twmrow); + $val = DbRep_numval ($val); # nichtnumerische Zeichen eliminieren + my ($yyyyt1, $mmt1, $ddt1, $hht1, $mint1, $sect1) = ($tn =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/); + $tn = timelocal($sect1, $mint1, $hht1, $ddt1, $mmt1-1, $yyyyt1-1900); + if(!$to) { + $val1 = $val; + $to = $tn; + next; + } + $dt = $tn - $to; + $sum += $val1*($dt/$tsum); + $val1 = $val; + $to = $tn; + Log3 ($name, 5, "DbRep $name - data element: $twmrow"); + Log3 ($name, 5, "DbRep $name - time sum: $tsum, delta time: $dt, value: $val1, twm: ".$val1*($dt/$tsum)); + } + } + if(AttrVal($name, "aggregation", "") eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + $arrstr .= $runtime_string."#".$sum."#".$rsf[0]."_".$rsf[1]."|"; + } else { + my @rsf = split(" ",$runtime_string_first); + $arrstr .= $runtime_string."#".$sum."#".$rsf[0]."|"; + } + } + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Ergebnisse in Datenbank schreiben + my ($wrt,$irowdone); + if($prop =~ /writeToDB/) { + ($wrt,$irowdone,$err) = DbRep_OutputWriteToDB($name,$device,$reading,$arrstr,$qlf); + if ($err) { + Log3 $hash->{NAME}, 2, "DbRep $name - $err"; + $err = encode_base64($err,""); + return "$name|''|$device|$reading|''|$err|''"; + } + $rt = $rt+$wrt; + } + + # Daten müssen als Einzeiler zurückgegeben werden + $arrstr = encode_base64($arrstr,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$arrstr|$device|$reading|$rt|0|$irowdone"; +} + 
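+####################################################################################################
+#  Illustrative sketch (not called anywhere in this module, helper name "DbRep_twmExample"
+#  is hypothetical): the time-weighted mean scheme of "avgTimeWeightMean" above, reduced to
+#  a plain list of [epoch_seconds, value] pairs sorted ascending, i.e.
+#  twm = sum( value_i * dt_i / tsum )
+####################################################################################################
+sub DbRep_twmExample {
+  my @data = @_;                                        # list of [epoch_seconds, numeric value]
+  return "insufficient values" if(@data < 2);           # same fallback string as used above
+  my $tsum = $data[-1][0] - $data[0][0];                # total time span of the slice
+  return "insufficient values" if(!$tsum);
+  my ($sum,$to,$val1) = (0,undef,undef);
+  foreach my $d (@data) {
+      my ($tn,$val) = @$d;
+      $sum += $val1*(($tn-$to)/$tsum) if(defined $to);  # weight previous value by its duration
+      ($to,$val1) = ($tn,$val);
+  }
+return $sum;
+}
+# Example: DbRep_twmExample([0,10],[600,20],[1200,20]) -> tsum = 1200,
+# twm = 10*600/1200 + 20*600/1200 = 15.
+# For comparison, "avgDailyMeanGWS" above uses sum/24 when at least 21 of the 24 hourly
+# values exist, otherwise (t01+t07+t13+t19)/4 if the four main hours are available.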
+#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage averageValue +#################################################################################################### +sub averval_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $arrstr = decode_base64($a[1]); + my $device = $a[2]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[3]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $bt = $a[4]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[5]?decode_base64($a[5]):undef; + my $irowdone = $a[6]; + my $reading_runtime_string; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + my $acf = AttrVal($name, "averageCalcForm", "avgArithmeticMean"); + if($acf eq "avgArithmeticMean") { + $acf = "AM" + } elsif ($acf eq "avgDailyMeanGWS") { + $acf = "DMGWS"; + } elsif ($acf eq "avgTimeWeightMean") { + $acf = "TWM"; + } + + # Readingaufbereitung + readingsBeginUpdate($hash); + + my @arr = split("\\|", $arrstr); + foreach my $row (@arr) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $c = $a[1]; + my $rsf = $a[2]."__"; + + if (AttrVal($hash->{NAME}, "readingNameMap", "")) { + $reading_runtime_string = $rsf.AttrVal($hash->{NAME}, "readingNameMap", "")."__".$runtime_string; + } else { + my $ds = $device."__" if ($device); + my $rds = $reading."__" if ($reading); + $reading_runtime_string = $rsf.$ds.$rds."AVG".$acf."__".$runtime_string; + } + if($acf eq "DMGWS") { + ReadingsBulkUpdateValue ($hash, $reading_runtime_string, looks_like_number($c)?sprintf("%.1f",$c):$c); + } else { + ReadingsBulkUpdateValue ($hash, $reading_runtime_string, $c?sprintf("%.4f",$c):"-"); + } + } + + ReadingsBulkUpdateValue ($hash, "db_lines_processed", $irowdone) if($hash->{LASTCMD} =~ /writeToDB/); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierende DB-Abfrage count +#################################################################################################### +sub count_DoParse($) { + my ($string) = @_; + my ($name,$table,$device,$reading,$ts) = split("\\§", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my ($dbh,$sql,$sth,$err,$selspec); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$device|$reading|''|$err|$table"; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" 
in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet,$aggregation) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + # DB-Abfrage zeilenweise für jeden Timearray-Eintrag + my $arrstr; + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,$table,"COUNT(*)",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",''); + } else { + $sql = DbRep_createSelectSql($hash,$table,"COUNT(*)",$device,$reading,undef,undef,''); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|$table"; + } + + # DB-Abfrage -> Ergebnis in @arr aufnehmen + my @line = $sth->fetchrow_array(); + + Log3 ($name, 5, "DbRep $name - SQL result: $line[0]") if($line[0]); + + if($aggregation eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + $arrstr .= $runtime_string."#".$line[0]."#".$rsf[0]."_".$rsf[1]."|"; + } else { + my @rsf = split(" ",$runtime_string_first); + $arrstr .= $runtime_string."#".$line[0]."#".$rsf[0]."|"; + } + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Daten müssen als Einzeiler zurückgegeben werden + $arrstr = encode_base64($arrstr,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$arrstr|$device|$reading|$rt|0|$table"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage count +#################################################################################################### +sub count_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $arrstr = decode_base64($a[1]); + my $device = $a[2]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[3]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $bt = $a[4]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[5]?decode_base64($a[5]):undef; + my $table = $a[6]; + my $reading_runtime_string; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + Log3 ($name, 5, "DbRep $name - SQL result decoded: $arrstr") if($arrstr); + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # Readingaufbereitung + readingsBeginUpdate($hash); + + my @arr = split("\\|", $arrstr); + foreach my $row (@arr) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $c = $a[1]; + my $rsf = $a[2]."__"; + + if (AttrVal($hash->{NAME}, "readingNameMap", "")) { + $reading_runtime_string = $rsf.AttrVal($hash->{NAME}, "readingNameMap", "")."__".$runtime_string; + } else { + my $ds = $device."__" if ($device); + my $rds = $reading."__" if ($reading); + $reading_runtime_string = $rsf.$ds.$rds."COUNT_".$table."__".$runtime_string; + } + + 
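+ # Illustrative example (assumed day aggregation, device "MyWetter", reading "temperature"):
+ # the reading name assembled above becomes e.g.
+ # "2018-01-28__MyWetter__temperature__COUNT_history__2018-01-28".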
ReadingsBulkUpdateValue ($hash, $reading_runtime_string, $c?$c:"-"); + } + + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierende DB-Abfrage maxValue +#################################################################################################### +sub maxval_DoParse($) { + my ($string) = @_; + my ($name,$device,$reading,$prop,$ts) = split("\\§", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my ($dbh,$sql,$sth,$err); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$device|$reading|''|$err|''"; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + # DB-Abfrage zeilenweise für jeden Array-Eintrag + my @row_array; + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + + $runtime_string = encode_base64($runtime_string,""); + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history","VALUE,TIMESTAMP",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'","ORDER BY TIMESTAMP"); + } else { + $sql = DbRep_createSelectSql($hash,"history","VALUE,TIMESTAMP",$device,$reading,undef,undef,"ORDER BY TIMESTAMP"); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|''"; + } + + my @array= map { $runtime_string." ".$_ -> [0]." ".$_ -> [1]."\n" } @{ $sth->fetchall_arrayref() }; + + if(!@array) { + if(AttrVal($name, "aggregation", "") eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + @array = ($runtime_string." "."0"." ".$rsf[0]."_".$rsf[1]."\n"); + } else { + my @rsf = split(" ",$runtime_string_first); + @array = ($runtime_string." "."0"." 
".$rsf[0]."\n"); + } + } + push(@row_array, @array); + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + Log3 ($name, 5, "DbRep $name -> raw data of row_array result:\n @row_array"); + + #---------- Berechnung Ergebnishash maxValue ------------------------ + my $i = 1; + my %rh = (); + my ($lastruntimestring,$row_max_time,$max_value); + + foreach my $row (@row_array) { + my @a = split("[ \t][ \t]*", $row); + my $runtime_string = decode_base64($a[0]); + $lastruntimestring = $runtime_string if ($i == 1); + my $value = $a[1]; + $a[-1] =~ s/:/-/g if($a[-1]); # substituieren unsupported characters -> siehe fhem.pl + my $timestamp = ($a[-1]&&$a[-2])?$a[-2]."_".$a[-1]:$a[-1]; + + # Leerzeichen am Ende $timestamp entfernen + $timestamp =~ s/\s+$//g; + + # Test auf $value = "numeric" + if (!looks_like_number($value)) { + Log3 ($name, 2, "DbRep $name - ERROR - value isn't numeric in maxValue function. Faulty dataset was \nTIMESTAMP: $timestamp, DEVICE: $device, READING: $reading, VALUE: $value."); + $err = encode_base64("Value isn't numeric. Faulty dataset was - TIMESTAMP: $timestamp, VALUE: $value", ""); + return "$name|''|$device|$reading|''|$err|''"; + } + + Log3 ($name, 5, "DbRep $name - Runtimestring: $runtime_string, DEVICE: $device, READING: $reading, TIMESTAMP: $timestamp, VALUE: $value"); + + if ($runtime_string eq $lastruntimestring) { + if (!defined($max_value) || $value >= $max_value) { + $max_value = $value; + $row_max_time = $timestamp; + $rh{$runtime_string} = $runtime_string."|".$max_value."|".$row_max_time; + } + } else { + # neuer Zeitabschnitt beginnt, ersten Value-Wert erfassen + $lastruntimestring = $runtime_string; + undef $max_value; + if (!defined($max_value) || $value >= $max_value) { + $max_value = $value; + $row_max_time = $timestamp; + $rh{$runtime_string} = $runtime_string."|".$max_value."|".$row_max_time; + } + } + $i++; + } + #--------------------------------------------------------------------------------------------- + + Log3 ($name, 5, "DbRep $name - result of maxValue calculation before encoding:"); + foreach my $key (sort(keys(%rh))) { + Log3 ($name, 5, "runtimestring Key: $key, value: ".$rh{$key}); + } + + # Ergebnishash als Einzeiler zurückgeben bzw. 
Übergabe Schreibroutine + my $rows = join('§', %rh); + + # Ergebnisse in Datenbank schreiben + my ($wrt,$irowdone); + if($prop =~ /writeToDB/) { + ($wrt,$irowdone,$err) = DbRep_OutputWriteToDB($name,$device,$reading,$rows,"max"); + if ($err) { + Log3 $hash->{NAME}, 2, "DbRep $name - $err"; + $err = encode_base64($err,""); + return "$name|''|$device|$reading|''|$err|''"; + } + $rt = $rt+$wrt; + } + + my $rowlist = encode_base64($rows,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$rowlist|$device|$reading|$rt|0|$irowdone"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage maxValue +#################################################################################################### +sub maxval_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $rowlist = decode_base64($a[1]); + my $device = $a[2]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[3]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $bt = $a[4]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[5]?decode_base64($a[5]):undef; + my $irowdone = $a[6]; + my $reading_runtime_string; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + my %rh = split("§", $rowlist); + + Log3 ($name, 5, "DbRep $name - result of maxValue calculation after decoding:"); + foreach my $key (sort(keys(%rh))) { + Log3 ($name, 5, "DbRep $name - runtimestring Key: $key, value: ".$rh{$key}); + } + + # Readingaufbereitung + readingsBeginUpdate($hash); + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + foreach my $key (sort(keys(%rh))) { + my @k = split("\\|",$rh{$key}); + my $rsf = $k[2]."__" if($k[2]); + + if (AttrVal($hash->{NAME}, "readingNameMap", "")) { + $reading_runtime_string = $rsf.AttrVal($hash->{NAME}, "readingNameMap", "")."__".$k[0]; + } else { + my $ds = $device."__" if ($device); + my $rds = $reading."__" if ($reading); + $reading_runtime_string = $rsf.$ds.$rds."MAX__".$k[0]; + } + my $rv = $k[1]; + + ReadingsBulkUpdateValue ($hash, $reading_runtime_string, defined($rv)?sprintf("%.4f",$rv):"-"); + } + + ReadingsBulkUpdateValue ($hash, "db_lines_processed", $irowdone) if($hash->{LASTCMD} =~ /writeToDB/); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierende DB-Abfrage minValue +#################################################################################################### +sub minval_DoParse($) { + my ($string) = @_; + my ($name,$device,$reading,$prop,$ts) = split("\\§", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my ($dbh,$sql,$sth,$err); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, 
"DbRep $name - $@"); + return "$name|''|$device|$reading|''|$err|''"; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + # DB-Abfrage zeilenweise für jeden Array-Eintrag + my @row_array; + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + + $runtime_string = encode_base64($runtime_string,""); + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history","VALUE,TIMESTAMP",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'","ORDER BY TIMESTAMP"); + } else { + $sql = DbRep_createSelectSql($hash,"history","VALUE,TIMESTAMP",$device,$reading,undef,undef,"ORDER BY TIMESTAMP"); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|''"; + } + + my @array= map { $runtime_string." ".$_ -> [0]." ".$_ -> [1]."\n" } @{ $sth->fetchall_arrayref() }; + + if(!@array) { + if(AttrVal($name, "aggregation", "") eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + @array = ($runtime_string." "."0"." ".$rsf[0]."_".$rsf[1]."\n"); + } else { + my @rsf = split(" ",$runtime_string_first); + @array = ($runtime_string." "."0"." ".$rsf[0]."\n"); + } + } + push(@row_array, @array); + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + Log3 ($name, 5, "DbRep $name -> raw data of row_array result:\n @row_array"); + + #---------- Berechnung Ergebnishash minValue ------------------------ + my $i = 1; + my %rh = (); + my $lastruntimestring; + my $row_min_time; + my ($min_value,$value); + + foreach my $row (@row_array) { + my @a = split("[ \t][ \t]*", $row); + my $runtime_string = decode_base64($a[0]); + $lastruntimestring = $runtime_string if ($i == 1); + $value = $a[1]; + $min_value = $a[1] if ($i == 1); + $a[-1] =~ s/:/-/g if($a[-1]); # substituieren unsupported characters -> siehe fhem.pl + my $timestamp = ($a[-1]&&$a[-2])?$a[-2]."_".$a[-1]:$a[-1]; + + # Leerzeichen am Ende $timestamp entfernen + $timestamp =~ s/\s+$//g; + + # Test auf $value = "numeric" + if (!looks_like_number($value)) { + # $a[-1] =~ s/\s+$//g; + Log3 ($name, 2, "DbRep $name - ERROR - value isn't numeric in minValue function. Faulty dataset was \nTIMESTAMP: $timestamp, DEVICE: $device, READING: $reading, VALUE: $value."); + $err = encode_base64("Value isn't numeric. 
Faulty dataset was - TIMESTAMP: $timestamp, VALUE: $value", ""); + return "$name|''|$device|$reading|''|$err|''"; + } + + Log3 ($name, 5, "DbRep $name - Runtimestring: $runtime_string, DEVICE: $device, READING: $reading, TIMESTAMP: $timestamp, VALUE: $value"); + + $rh{$runtime_string} = $runtime_string."|".$min_value."|".$timestamp if ($i == 1); # minValue des ersten SQL-Statements in hash einfügen + + if ($runtime_string eq $lastruntimestring) { + if (!defined($min_value) || $value < $min_value) { + $min_value = $value; + $row_min_time = $timestamp; + $rh{$runtime_string} = $runtime_string."|".$min_value."|".$row_min_time; + } + } else { + # neuer Zeitabschnitt beginnt, ersten Value-Wert erfassen + $lastruntimestring = $runtime_string; + $min_value = $value; + $row_min_time = $timestamp; + $rh{$runtime_string} = $runtime_string."|".$min_value."|".$row_min_time; + } + $i++; + } + #--------------------------------------------------------------------------------------------- + + Log3 ($name, 5, "DbRep $name - result of minValue calculation before encoding:"); + foreach my $key (sort(keys(%rh))) { + Log3 ($name, 5, "runtimestring Key: $key, value: ".$rh{$key}); + } + + # Ergebnishash als Einzeiler zurückgeben bzw. an Schreibroutine übergeben + my $rows = join('§', %rh); + + # Ergebnisse in Datenbank schreiben + my ($wrt,$irowdone); + if($prop =~ /writeToDB/) { + ($wrt,$irowdone,$err) = DbRep_OutputWriteToDB($name,$device,$reading,$rows,"min"); + if ($err) { + Log3 $hash->{NAME}, 2, "DbRep $name - $err"; + $err = encode_base64($err,""); + return "$name|''|$device|$reading|''|$err|''"; + } + $rt = $rt+$wrt; + } + + my $rowlist = encode_base64($rows,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$rowlist|$device|$reading|$rt|0|$irowdone"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage minValue +#################################################################################################### +sub minval_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $rowlist = decode_base64($a[1]); + my $device = $a[2]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[3]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $bt = $a[4]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[5]?decode_base64($a[5]):undef; + my $irowdone = $a[6]; + my $reading_runtime_string; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + my %rh = split("§", $rowlist); + + Log3 ($name, 5, "DbRep $name - result of minValue calculation after decoding:"); + foreach my $key (sort(keys(%rh))) { + Log3 ($name, 5, "DbRep $name - runtimestring Key: $key, value: ".$rh{$key}); + } + + # Readingaufbereitung + readingsBeginUpdate($hash); + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + foreach my $key (sort(keys(%rh))) { + my @k = split("\\|",$rh{$key}); + my $rsf = $k[2]."__" if($k[2]); + + if (AttrVal($hash->{NAME}, "readingNameMap", "")) { + $reading_runtime_string = $rsf.AttrVal($hash->{NAME}, "readingNameMap", "")."__".$k[0]; + } else { + my $ds = $device."__" if ($device); + my $rds = $reading."__" if ($reading); + $reading_runtime_string = $rsf.$ds.$rds."MIN__".$k[0]; + } 
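+ # Each %rh entry decoded above carries "runtime_string|min_value|timestamp_of_min",
+ # e.g. "2018-01-28|3.4|2018-01-28_05-30-00" (assumed data): $k[2] (used above for the
+ # time prefix) is when the minimum occurred, $k[1] below is the minimum itself.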
+ my $rv = $k[1]; + + ReadingsBulkUpdateValue ($hash, $reading_runtime_string, defined($rv)?sprintf("%.4f",$rv):"-"); + } + + ReadingsBulkUpdateValue ($hash, "db_lines_processed", $irowdone) if($hash->{LASTCMD} =~ /writeToDB/); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierende DB-Abfrage diffValue +#################################################################################################### +sub diffval_DoParse($) { + my ($string) = @_; + my ($name,$device,$reading,$prop,$ts) = split("\\§", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbmodel = $dbloghash->{MODEL}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my ($dbh,$sql,$sth,$err,$selspec); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$device|$reading|''|''|''|$err|''"; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + #vorbereiten der DB-Abfrage, DB-Modell-abhaengig + if($dbmodel eq "MYSQL") { + $selspec = "TIMESTAMP,VALUE, if(VALUE-\@V < 0 OR \@RB = 1 , \@diff:= 0, \@diff:= VALUE-\@V ) as DIFF, \@V:= VALUE as VALUEBEFORE, \@RB:= '0' as RBIT "; + } else { + $selspec = "TIMESTAMP,VALUE"; + } + + # SQL-Startzeit + my $st = [gettimeofday]; + + # DB-Abfrage zeilenweise für jeden Array-Eintrag + my @row_array; + my @array; + + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + $runtime_string = encode_base64($runtime_string,""); + + if($dbmodel eq "MYSQL") { + eval {$dbh->do("set \@V:= 0, \@diff:= 0, \@diffTotal:= 0, \@RB:= 1;");}; # @\RB = Resetbit wenn neues Selektionsintervall beginnt + } + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|''|''|$err|''"; + } + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",''); + } else { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,undef,undef,''); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|''|''|$err|''"; + + } else { + if($dbmodel eq "MYSQL") { + @array = map { $runtime_string." ".$_ -> [0]." ".$_ -> [1]." ".$_ -> [2]."\n" } @{ $sth->fetchall_arrayref() }; + } else { + @array = map { $runtime_string." ".$_ -> [0]." 
".$_ -> [1]."\n" } @{ $sth->fetchall_arrayref() }; + + if (@array) { + my @sp; + my $dse = 0; + my $vold; + my @sqlite_array; + foreach my $row (@array) { + @sp = split("[ \t][ \t]*", $row, 4); + my $runtime_string = $sp[0]; + my $timestamp = $sp[2]?$sp[1]." ".$sp[2]:$sp[1]; + my $vnew = $sp[3]; + $vnew =~ tr/\n//d; + + $dse = ($vold && (($vnew-$vold) > 0))?($vnew-$vold):0; + @sp = $runtime_string." ".$timestamp." ".$vnew." ".$dse."\n"; + $vold = $vnew; + push(@sqlite_array, @sp); + } + @array = @sqlite_array; + } + } + + if(!@array) { + if(AttrVal($name, "aggregation", "") eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + @array = ($runtime_string." ".$rsf[0]."_".$rsf[1]."\n"); + } else { + my @rsf = split(" ",$runtime_string_first); + @array = ($runtime_string." ".$rsf[0]."\n"); + } + } + push(@row_array, @array); + } + } + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + $dbh->disconnect; + + Log3 ($name, 5, "DbRep $name - raw data of row_array result:\n @row_array"); + + my $difflimit = AttrVal($name, "diffAccept", "20"); # legt fest, bis zu welchem Wert Differenzen akzeptiert werden (Ausreißer eliminieren) + + # Berechnung diffValue aus Selektionshash + my %rh = (); # Ergebnishash, wird alle Ergebniszeilen enthalten + my %ch = (); # counthash, enthält die Anzahl der verarbeiteten Datasets pro runtime_string + my $lastruntimestring; + my $i = 1; + my $lval; # immer der letzte Wert von $value + my $rslval; # runtimestring von lval + my $uediff; # Übertragsdifferenz (Differenz zwischen letzten Wert einer Aggregationsperiode und dem ersten Wert der Folgeperiode) + my $diff_current; # Differenzwert des aktuellen Datasets + my $diff_before; # Differenzwert vorheriger Datensatz + my $rejectstr; # String der ignorierten Differenzsätze + my $diff_total; # Summenwert aller berücksichtigten Teildifferenzen + my $max = ($#row_array)+1; # Anzahl aller Listenelemente + + Log3 ($name, 5, "DbRep $name - data of row_array result assigned to fields:\n"); + + foreach my $row (@row_array) { + my @a = split("[ \t][ \t]*", $row, 6); + my $runtime_string = decode_base64($a[0]); + $lastruntimestring = $runtime_string if ($i == 1); + my $timestamp = $a[2]?$a[1]."_".$a[2]:$a[1]; + my $value = $a[3]?$a[3]:0; + my $diff = $a[4]?sprintf("%.4f",$a[4]):0; + +# if ($uediff) { +# $diff = $diff + $uediff; +# Log3 ($name, 4, "DbRep $name - balance difference of $uediff between $rslval and $runtime_string"); +# $uediff = 0; +# } + + # Leerzeichen am Ende $timestamp entfernen + $timestamp =~ s/\s+$//g; + + # Test auf $value = "numeric" + if (!looks_like_number($value)) { + $a[3] =~ s/\s+$//g; + Log3 ($name, 2, "DbRep $name - ERROR - value isn't numeric in diffValue function. Faulty dataset was \nTIMESTAMP: $timestamp, DEVICE: $device, READING: $reading, VALUE: $value."); + $err = encode_base64("Value isn't numeric. Faulty dataset was - TIMESTAMP: $timestamp, VALUE: $value", ""); + return "$name|''|$device|$reading|''|''|''|$err|''"; + } + + Log3 ($name, 5, "DbRep $name - Runtimestring: $runtime_string, DEVICE: $device, READING: $reading, \nTIMESTAMP: $timestamp, VALUE: $value, DIFF: $diff"); + + # String ignorierter Zeilen erzeugen + $diff_current = $timestamp." ".$diff; + if($diff > $difflimit) { + $rejectstr .= $diff_before." 
-> ".$diff_current."\n"; + } + $diff_before = $diff_current; + + # Ergebnishash erzeugen + if ($runtime_string eq $lastruntimestring) { + if ($i == 1) { + $diff_total = $diff?$diff:0 if($diff <= $difflimit); + $rh{$runtime_string} = $runtime_string."|".$diff_total."|".$timestamp; + $ch{$runtime_string} = 1 if($value); + $lval = $value; + $rslval = $runtime_string; + } + + if ($diff) { + if($diff <= $difflimit) { + $diff_total = $diff_total+$diff; + } + $rh{$runtime_string} = $runtime_string."|".$diff_total."|".$timestamp; + $ch{$runtime_string}++ if($value && $i > 1); + $lval = $value; + $rslval = $runtime_string; + } + } else { + # neuer Zeitabschnitt beginnt, ersten Value-Wert erfassen und Übertragsdifferenz bilden + $lastruntimestring = $runtime_string; + $i = 1; + + $uediff = $value - $lval if($value > $lval); + $diff = $uediff; + $lval = $value if($value); # Übetrag über Perioden mit value = 0 hinweg ! + $rslval = $runtime_string; + Log3 ($name, 4, "DbRep $name - balance difference of $uediff between $rslval and $runtime_string"); + + + $diff_total = $diff?$diff:0 if($diff <= $difflimit); + $rh{$runtime_string} = $runtime_string."|".$diff_total."|".$timestamp; + $ch{$runtime_string} = 1 if($value); + + $uediff = 0; + } + $i++; + } + + Log3 ($name, 4, "DbRep $name - result of diffValue calculation before encoding:"); + foreach my $key (sort(keys(%rh))) { + Log3 ($name, 4, "runtimestring Key: $key, value: ".$rh{$key}); + } + + my $ncp = DbRep_calcount($hash,\%ch); + + my ($ncps,$ncpslist); + if(%$ncp) { + Log3 ($name, 3, "DbRep $name - time/aggregation periods containing only one dataset -> no diffValue calc was possible in period:"); + foreach my $key (sort(keys%{$ncp})) { + Log3 ($name, 3, $key) ; + } + $ncps = join('§', %$ncp); + $ncpslist = encode_base64($ncps,""); + } + + # Ergebnishash als Einzeiler zurückgeben + # ignorierte Zeilen ($diff > $difflimit) + my $rowsrej = encode_base64($rejectstr,"") if($rejectstr); + + # Ergebnishash + my $rows = join('§', %rh); + + # Ergebnisse in Datenbank schreiben + my ($wrt,$irowdone); + if($prop =~ /writeToDB/) { + ($wrt,$irowdone,$err) = DbRep_OutputWriteToDB($name,$device,$reading,$rows,"diff"); + if ($err) { + Log3 $hash->{NAME}, 2, "DbRep $name - $err"; + $err = encode_base64($err,""); + return "$name|''|$device|$reading|''|''|''|$err|''"; + } + $rt = $rt+$wrt; + } + + my $rowlist = encode_base64($rows,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$rowlist|$device|$reading|$rt|$rowsrej|$ncpslist|0|$irowdone"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage diffValue +#################################################################################################### +sub diffval_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $rowlist = decode_base64($a[1]); + my $device = $a[2]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[3]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $bt = $a[4]; + my ($rt,$brt) = split(",", $bt); + my $rowsrej = $a[5]?decode_base64($a[5]):undef; # String von Datensätzen die nicht berücksichtigt wurden (diff Schwellenwert Überschreitung) + my $ncpslist = decode_base64($a[6]); # Hash von Perioden die nicht kalkuliert werden konnten "no calc in period" + my $err = $a[7]?decode_base64($a[7]):undef; + my $irowdone = $a[8]; + my 
$reading_runtime_string; + my $difflimit = AttrVal($name, "diffAccept", "20"); # legt fest, bis zu welchem Wert Differenzen akzeptoert werden (Ausreißer eliminieren)AttrVal($name, "diffAccept", "20"); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # Auswertung hashes für state-Warning + $rowsrej =~ s/_/ /g; + Log3 ($name, 3, "DbRep $name -> data ignored while calc diffValue due to threshold overrun (diffAccept = $difflimit): \n$rowsrej") + if($rowsrej); + $rowsrej =~ s/\n/ \|\| /g; + + my %ncp = split("§", $ncpslist); + my $ncpstr; + if(%ncp) { + foreach my $ncpkey (sort(keys(%ncp))) { + $ncpstr .= $ncpkey." || "; + } + } + + # Readingaufbereitung + my %rh = split("§", $rowlist); + + Log3 ($name, 4, "DbRep $name - result of diffValue calculation after decoding:"); + foreach my $key (sort(keys(%rh))) { + Log3 ($name, 4, "DbRep $name - runtimestring Key: $key, value: ".$rh{$key}); + } + + readingsBeginUpdate($hash); + + foreach my $key (sort(keys(%rh))) { + my @k = split("\\|",$rh{$key}); + my $rts = $k[2]."__"; + $rts =~ s/:/-/g; # substituieren unsupported characters -> siehe fhem.pl + + if (AttrVal($hash->{NAME}, "readingNameMap", "")) { + $reading_runtime_string = $rts.AttrVal($hash->{NAME}, "readingNameMap", "")."__".$k[0]; + } else { + my $ds = $device."__" if ($device); + my $rds = $reading."__" if ($reading); + $reading_runtime_string = $rts.$ds.$rds."DIFF__".$k[0]; + } + my $rv = $k[1]; + + ReadingsBulkUpdateValue ($hash, $reading_runtime_string, $rv?sprintf("%.4f",$rv):"-"); + + } + + ReadingsBulkUpdateValue ($hash, "db_lines_processed", $irowdone) if($hash->{LASTCMD} =~ /writeToDB/); + ReadingsBulkUpdateValue ($hash, "diff_overrun_limit_".$difflimit, $rowsrej) if($rowsrej); + ReadingsBulkUpdateValue ($hash, "less_data_in_period", $ncpstr) if($ncpstr); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,($ncpstr||$rowsrej)?"Warning":"done"); + + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierende DB-Abfrage sumValue +#################################################################################################### +sub sumval_DoParse($) { + my ($string) = @_; + my ($name,$device,$reading,$prop,$ts) = split("\\§", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my ($dbh,$sql,$sth,$err,$selspec); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$device|$reading|''|$err|''"; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" 
in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + #vorbereiten der DB-Abfrage, DB-Modell-abhaengig + if ($dbloghash->{MODEL} eq "POSTGRESQL") { + $selspec = "SUM(VALUE::numeric)"; + } elsif ($dbloghash->{MODEL} eq "MYSQL") { + $selspec = "SUM(VALUE)"; + } elsif ($dbloghash->{MODEL} eq "SQLITE") { + $selspec = "SUM(VALUE)"; + } else { + $selspec = "SUM(VALUE)"; + } + + # SQL-Startzeit + my $st = [gettimeofday]; + + # DB-Abfrage zeilenweise für jeden Array-Eintrag + my $arrstr; + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",''); + } else { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,undef,undef,''); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$device|$reading|''|$err|''"; + } + + # DB-Abfrage -> Ergebnis in @arr aufnehmen + my @line = $sth->fetchrow_array(); + + Log3 ($name, 5, "DbRep $name - SQL result: $line[0]") if($line[0]); + + if(AttrVal($name, "aggregation", "") eq "hour") { + my @rsf = split(/[" "\|":"]/,$runtime_string_first); + $arrstr .= $runtime_string."#".$line[0]."#".$rsf[0]."_".$rsf[1]."|"; + } else { + my @rsf = split(" ",$runtime_string_first); + $arrstr .= $runtime_string."#".$line[0]."#".$rsf[0]."|"; + } + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Ergebnisse in Datenbank schreiben + my ($wrt,$irowdone); + if($prop =~ /writeToDB/) { + ($wrt,$irowdone,$err) = DbRep_OutputWriteToDB($name,$device,$reading,$arrstr,"sum"); + if ($err) { + Log3 $hash->{NAME}, 2, "DbRep $name - $err"; + $err = encode_base64($err,""); + return "$name|''|$device|$reading|''|$err|''"; + } + $rt = $rt+$wrt; + } + + # Daten müssen als Einzeiler zurückgegeben werden + $arrstr = encode_base64($arrstr,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$arrstr|$device|$reading|$rt|0|$irowdone"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage sumValue +#################################################################################################### +sub sumval_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $arrstr = decode_base64($a[1]); + my $device = $a[2]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[3]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $bt = $a[4]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[5]?decode_base64($a[5]):undef; + my $irowdone = $a[6]; + my $reading_runtime_string; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + 
no warnings 'uninitialized'; + + # Readingaufbereitung + readingsBeginUpdate($hash); + + my @arr = split("\\|", $arrstr); + foreach my $row (@arr) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $c = $a[1]; + my $rsf = $a[2]."__"; + + if (AttrVal($hash->{NAME}, "readingNameMap", "")) { + $reading_runtime_string = $rsf.AttrVal($hash->{NAME}, "readingNameMap", "")."__".$runtime_string; + } else { + my $ds = $device."__" if ($device); + my $rds = $reading."__" if ($reading); + $reading_runtime_string = $rsf.$ds.$rds."SUM__".$runtime_string; + } + + ReadingsBulkUpdateValue ($hash, $reading_runtime_string, $c?sprintf("%.4f",$c):"-"); + } + + ReadingsBulkUpdateValue ($hash, "db_lines_processed", $irowdone) if($hash->{LASTCMD} =~ /writeToDB/); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierendes DB delete +#################################################################################################### +sub del_DoParse($) { + my ($string) = @_; + my ($name,$table,$device,$reading,$runtime_string_first,$runtime_string_next) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my ($dbh,$sql,$sth,$err,$rows); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1 });}; + + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err|''|''|''"; + } + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" 
in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # SQL zusammenstellen für DB-Operation + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createDeleteSql($hash,$table,$device,$reading,$runtime_string_first,$runtime_string_next,''); + } else { + $sql = DbRep_createDeleteSql($hash,$table,$device,$reading,undef,undef,''); + } + + $sth = $dbh->prepare($sql); + + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + eval {$sth->execute();}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err|''|''|''"; + } + + $rows = $sth->rows; + $dbh->commit() if(!$dbh->{AutoCommit}); + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + Log3 ($name, 5, "DbRep $name - Number of deleted rows: $rows"); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$rows|$rt|0|$table|$device|$reading"; +} + +#################################################################################################### +# Auswertungsroutine DB delete +#################################################################################################### +sub del_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $rows = $a[1]; + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + my $table = $a[4]; + my $device = $a[5]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[6]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $erread; + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "delEntries"); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + my ($reading_runtime_string, $ds, $rds); + if (AttrVal($hash->{NAME}, "readingNameMap", "")) { + $reading_runtime_string = AttrVal($hash->{NAME}, "readingNameMap", "")."--DELETED_ROWS--"; + } else { + $ds = $device."--" if ($device && $table ne "current"); + $rds = $reading."--" if ($reading && $table ne "current"); + $reading_runtime_string = $ds.$rds."--DELETED_ROWS_".uc($table)."--"; + } + + readingsBeginUpdate($hash); + + ReadingsBulkUpdateValue ($hash, $reading_runtime_string, $rows); + + $rows = ($table eq "current")?$rows:$ds.$rds.$rows; + Log3 ($name, 3, "DbRep $name - Entries of $hash->{DATABASE}.$table deleted: $rows"); + + my $state = $erread?$erread:"done"; + ReadingsBulkUpdateTimeState($hash,$brt,$rt,$state); + + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierendes DB insert +#################################################################################################### +sub insert_Push($) { + my ($name) = @_; + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + my ($err,$sth); + + # 
Background-Startzeit + my $bst = [gettimeofday]; + + my $dbh; + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });}; + + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err"; + } + + # check ob PK verwendet wird, @usepkx?Anzahl der Felder im PK:0 wenn kein PK, $pkx?Namen der Felder:none wenn kein PK + my ($usepkh,$usepkc,$pkh,$pkc) = DbRep_checkUsePK($hash,$dbloghash,$dbh); + + my $i_timestamp = $hash->{HELPER}{I_TIMESTAMP}; + my $i_device = $hash->{HELPER}{I_DEVICE}; + my $i_type = $hash->{HELPER}{I_TYPE}; + my $i_event = $hash->{HELPER}{I_EVENT}; + my $i_reading = $hash->{HELPER}{I_READING}; + my $i_value = $hash->{HELPER}{I_VALUE}; + my $i_unit = $hash->{HELPER}{I_UNIT} ? $hash->{HELPER}{I_UNIT} : " "; + + # SQL zusammenstellen für DB-Operation + Log3 ($name, 5, "DbRep $name -> data to insert Timestamp: $i_timestamp, Device: $i_device, Type: $i_type, Event: $i_event, Reading: $i_reading, Value: $i_value, Unit: $i_unit"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + # insert history mit/ohne primary key + if ($usepkh && $dbloghash->{MODEL} eq 'MYSQL') { + eval { $sth = $dbh->prepare("INSERT IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'SQLITE') { + eval { $sth = $dbh->prepare("INSERT OR IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'POSTGRESQL') { + eval { $sth = $dbh->prepare("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING"); }; + } else { + eval { $sth = $dbh->prepare("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect(); + return "$name|''|''|$err"; + } + + $dbh->begin_work(); + + eval {$sth->execute($i_timestamp, $i_device, $i_type, $i_event, $i_reading, $i_value, $i_unit);}; + + my $irow; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Insert new dataset into database failed".($usepkh?" 
(possible PK violation) ":": ")."$@"); + $dbh->rollback(); + $dbh->disconnect(); + return "$name|''|''|$err"; + } else { + $dbh->commit(); + $irow = $sth->rows; + $dbh->disconnect(); + } + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$irow|$rt|0"; +} + +#################################################################################################### +# Auswertungsroutine DB insert +#################################################################################################### +sub insert_Done($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $irow = $a[1]; + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + + my $i_timestamp = delete $hash->{HELPER}{I_TIMESTAMP}; + my $i_device = delete $hash->{HELPER}{I_DEVICE}; + my $i_type = delete $hash->{HELPER}{I_TYPE}; + my $i_event = delete $hash->{HELPER}{I_EVENT}; + my $i_reading = delete $hash->{HELPER}{I_READING}; + my $i_value = delete $hash->{HELPER}{I_VALUE}; + my $i_unit = delete $hash->{HELPER}{I_UNIT}; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + readingsBeginUpdate($hash); + + ReadingsBulkUpdateValue ($hash, "number_lines_inserted", $irow); + ReadingsBulkUpdateValue ($hash, "data_inserted", $i_timestamp.", ".$i_device.", ".$i_type.", ".$i_event.", ".$i_reading.", ".$i_value.", ".$i_unit); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + + readingsEndUpdate($hash, 1); + + Log3 ($name, 5, "DbRep $name - Inserted into database $hash->{DATABASE} table 'history': Timestamp: $i_timestamp, Device: $i_device, Type: $i_type, Event: $i_event, Reading: $i_reading, Value: $i_value, Unit: $i_unit"); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# Current-Tabelle mit Device,Reading Kombinationen aus history auffüllen +#################################################################################################### +sub currentfillup_Push($) { + my ($string) = @_; + my ($name,$device,$reading,$runtime_string_first,$runtime_string_next) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + my ($err,$sth,$sql,$devs,$danz,$ranz); + + # Background-Startzeit + my $bst = [gettimeofday]; + + my $dbh; + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });}; + + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err|''|''"; + } + + # check ob PK verwendet wird, @usepkx?Anzahl der Felder im PK:0 wenn kein PK, $pkx?Namen der Felder:none wenn kein PK + my ($usepkh,$usepkc,$pkh,$pkc) = DbRep_checkUsePK($hash,$dbloghash,$dbh); + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" 
in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + ($devs,$danz,$reading,$ranz) = DbRep_specsForSql($hash,$device,$reading); + + # SQL-Startzeit + my $st = [gettimeofday]; + + # insert history mit/ohne primary key + # SQL zusammenstellen für DB-Operation + if ($usepkc && $dbloghash->{MODEL} eq 'MYSQL') { + $sql = "INSERT IGNORE INTO current (TIMESTAMP,DEVICE,READING) SELECT timestamp,device,reading FROM history where "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if ($IsTimeSet) { + $sql .= "TIMESTAMP >= '$runtime_string_first' AND TIMESTAMP < '$runtime_string_next' "; + } else { + $sql .= "1 "; + } + $sql .= "group by timestamp,device,reading ;"; + + } elsif ($usepkc && $dbloghash->{MODEL} eq 'SQLITE') { + $sql = "INSERT OR IGNORE INTO current (TIMESTAMP,DEVICE,READING) SELECT timestamp,device,reading FROM history where "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if ($IsTimeSet) { + $sql .= "TIMESTAMP >= '$runtime_string_first' AND TIMESTAMP < '$runtime_string_next' "; + } else { + $sql .= "1 "; + } + $sql .= "group by timestamp,device,reading ;"; + + } elsif ($usepkc && $dbloghash->{MODEL} eq 'POSTGRESQL') { + $sql = "INSERT INTO current (DEVICE,TIMESTAMP,READING) SELECT device, (array_agg(timestamp ORDER BY reading ASC))[1], reading FROM history where "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if ($IsTimeSet) { + $sql .= "TIMESTAMP >= '$runtime_string_first' AND TIMESTAMP < '$runtime_string_next' "; + } else { + $sql .= "true "; + } + $sql .= "group by device,reading ON CONFLICT ($pkc) DO NOTHING; "; + + } else { + if($dbloghash->{MODEL} ne 'POSTGRESQL') { + # MySQL und SQLite + $sql = "INSERT INTO current (TIMESTAMP,DEVICE,READING) SELECT timestamp,device,reading FROM history where "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if ($IsTimeSet) { + $sql .= "TIMESTAMP >= '$runtime_string_first' AND TIMESTAMP < 
'$runtime_string_next' "; + } else { + $sql .= "1 "; + } + $sql .= "group by device,reading ;"; + } else { + # PostgreSQL + $sql = "INSERT INTO current (DEVICE,TIMESTAMP,READING) SELECT device, (array_agg(timestamp ORDER BY reading ASC))[1], reading FROM history where "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if ($IsTimeSet) { + $sql .= "TIMESTAMP >= '$runtime_string_first' AND TIMESTAMP < '$runtime_string_next' "; + } else { + $sql .= "true "; + } + $sql .= "group by device,reading;"; + } + } + + # Log SQL Statement + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval { $sth = $dbh->prepare($sql); }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect(); + return "$name|''|''|$err|''|''"; + } + + + my $irow; + $dbh->begin_work(); + + eval {$sth->execute();}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Insert new dataset into database failed".($usepkh?" (possible PK violation) ":": ")."$@"); + $dbh->rollback(); + $dbh->disconnect(); + return "$name|''|''|$err|''|''"; + } else { + $dbh->commit(); + $irow = $sth->rows; + $dbh->disconnect(); + } + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$irow|$rt|0|$device|$reading"; +} + +#################################################################################################### +# Auswertungsroutine Current-Tabelle auffüllen +#################################################################################################### +sub currentfillup_Done($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $irow = $a[1]; + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + my $device = $a[4]; + my $reading = $a[5]; + + undef $device if ($device =~ m(^%$)); + undef $reading if ($reading =~ m(^%$)); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + my $rowstr; + $rowstr = $irow if(!$device && !$reading); + $rowstr = $irow." - limited by device: ".$device if($device && !$reading); + $rowstr = $irow." - limited by reading: ".$reading if(!$device && $reading); + $rowstr = $irow." - limited by device: ".$device." 
and reading: ".$reading if($device && $reading); + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue($hash, "number_lines_inserted", $rowstr); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + Log3 ($name, 3, "DbRep $name - Table '$hash->{DATABASE}'.'current' filled up with rows: $rowstr"); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierendes DB deviceRename / readingRename +#################################################################################################### +sub change_Push($) { + my ($string) = @_; + my ($name,$device,$reading,$runtime_string_first,$runtime_string_next) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $table = "history"; + my ($dbh,$err,$sql); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1 });}; + + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err"; + } + + my $renmode = $hash->{HELPER}{RENMODE}; + + # SQL-Startzeit + my $st = [gettimeofday]; + + my ($sth,$old,$new); + eval { $dbh->begin_work() if($dbh->{AutoCommit}); }; # Transaktion wenn gewünscht und autocommit ein + if ($@) { + Log3($name, 2, "DbRep $name -> Error start transaction - $@"); + } + + if ($renmode eq "devren") { + $old = delete $hash->{HELPER}{OLDDEV}; + $new = delete $hash->{HELPER}{NEWDEV}; + + # SQL zusammenstellen für DB-Operation + Log3 ($name, 5, "DbRep $name -> Rename old device name \"$old\" to new device name \"$new\" in database $dblogname "); + + # prepare DB operation + $old =~ s/'/''/g; # escape ' with '' + $new =~ s/'/''/g; # escape ' with '' + $sql = "UPDATE history SET TIMESTAMP=TIMESTAMP,DEVICE='$new' WHERE DEVICE='$old'; "; + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + $sth = $dbh->prepare($sql) ; + + } elsif ($renmode eq "readren") { + $old = delete $hash->{HELPER}{OLDREAD}; + $new = delete $hash->{HELPER}{NEWREAD}; + + # SQL zusammenstellen für DB-Operation + Log3 ($name, 5, "DbRep $name -> Rename old reading name \"$old\" to new reading name \"$new\" in database $dblogname "); + + # prepare DB operation + $old =~ s/'/''/g; # escape ' with '' + $new =~ s/'/''/g; # escape ' with '' + $sql = "UPDATE history SET TIMESTAMP=TIMESTAMP,READING='$new' WHERE READING='$old'; "; + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + $sth = $dbh->prepare($sql) ; + + } + + $old =~ s/''/'/g; # escape back + $new =~ s/''/'/g; # escape back + + my $urow; + eval { $sth->execute(); }; + if ($@) { + $err = encode_base64($@,""); + my $m = ($renmode eq "devren")?"device":"reading"; + Log3 ($name, 2, "DbRep $name - Failed to rename old $m name \"$old\" to new $m name \"$new\": $@"); + $dbh->rollback() if(!$dbh->{AutoCommit}); + $dbh->disconnect(); + return "$name|''|''|$err"; + } else { + $dbh->commit() if(!$dbh->{AutoCommit}); + $urow = $sth->rows; + $dbh->disconnect(); + } + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$urow|$rt|0|$old|$new"; +} + 
+####################################################################################################
+# nichtblockierendes DB changeValue
+####################################################################################################
+sub changeval_Push($) {
+  my ($string) = @_;
+  my ($name,$device,$reading,$runtime_string_first,$runtime_string_next,$ts) = split("\\§", $string);
+  my $hash       = $defs{$name};
+  my $dbloghash  = $hash->{dbloghash};
+  my $dbconn     = $dbloghash->{dbconn};
+  my $dbuser     = $dbloghash->{dbuser};
+  my $dblogname  = $dbloghash->{NAME};
+  my $dbpassword = $attr{"sec$dblogname"}{secret};
+  my $table      = "history";
+  my $complex    = $hash->{HELPER}{COMPLEX};   # einfache oder komplexe Werteersetzung
+  my ($dbh,$err,$sql,$urow);
+
+  # Background-Startzeit
+  my $bst = [gettimeofday];
+
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1 });};
+
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - $@");
+      return "$name|''|''|$err";
+  }
+
+  # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" in SQL sonst undef)
+  my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash);
+  Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet");
+
+  # SQL-Startzeit
+  my $st = [gettimeofday];
+
+  my ($sth,$old,$new);
+  eval { $dbh->begin_work() if($dbh->{AutoCommit}); };   # Transaktion wenn gewünscht und autocommit ein
+  if ($@) {
+      Log3($name, 2, "DbRep $name -> Error start transaction - $@");
+  }
+
+  if (!$complex) {
+      $old = delete $hash->{HELPER}{OLDVAL};
+      $new = delete $hash->{HELPER}{NEWVAL};
+
+      # SQL zusammenstellen für DB-Operation
+      Log3 ($name, 5, "DbRep $name -> Change old value \"$old\" to new value \"$new\" in database $dblogname ");
+
+      # prepare DB operation
+      $old =~ s/'/''/g;   # escape ' with ''
+      $new =~ s/'/''/g;   # escape ' with ''
+
+      # SQL zusammenstellen für DB-Update
+      my $addon = $old =~ /%/?"WHERE VALUE LIKE '$old'":"WHERE VALUE='$old'";
+      if ($IsTimeSet) {
+          $sql = DbRep_createUpdateSql($hash,$table,"TIMESTAMP=TIMESTAMP,VALUE='$new' $addon",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",'');
+      } else {
+          $sql = DbRep_createUpdateSql($hash,$table,"TIMESTAMP=TIMESTAMP,VALUE='$new' $addon",$device,$reading,undef,undef,'');
+      }
+      Log3 ($name, 4, "DbRep $name - SQL execute: $sql");
+      $sth = $dbh->prepare($sql);
+
+      $old =~ s/''/'/g;   # escape back
+      $new =~ s/''/'/g;   # escape back
+
+      eval { $sth->execute(); };
+      if ($@) {
+          $err = encode_base64($@,"");
+          Log3 ($name, 2, "DbRep $name - Failed to change old value \"$old\" to new value \"$new\": $@");
+          $dbh->rollback() if(!$dbh->{AutoCommit});
+          $dbh->disconnect();
+          return "$name|''|''|$err";
+      } else {
+          $dbh->commit() if(!$dbh->{AutoCommit});
+          $urow = $sth->rows;
+      }
+
+  } else {
+      $old = delete $hash->{HELPER}{OLDVAL};
+      $new = delete $hash->{HELPER}{NEWVAL};
+      $old =~ s/'/''/g;   # escape ' with ''
+
+      # Timestampstring to Array
+      my @ts = split("\\|", $ts);
+      Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts");
+
+      # DB-Abfrage zeilenweise für jeden Array-Eintrag
+      $urow = 0;
+      my $selspec = "DEVICE,READING,TIMESTAMP,VALUE,UNIT";
+      my $addon   = $old =~ /%/?"AND VALUE LIKE '$old'":"AND VALUE='$old'";
+      foreach my $row (@ts) {
+          my @a = split("#", $row);
+          my $runtime_string       = $a[0];
+          my $runtime_string_first = $a[1];
+          my $runtime_string_next  = $a[2];
+
+          if ($IsTimeSet || $IsAggrSet) {
+              $sql = 
DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",$addon); + } else { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,undef,undef,$addon); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err"; + } + + no warnings 'uninitialized'; + # DEVICE _ESC_ READING _ESC_ DATE _ESC_ TIME _ESC_ VALUE _ESC_ UNIT + my @row_array = map { $_->[0]."_ESC_".$_->[1]."_ESC_".($_->[2] =~ s/ /_ESC_/r)."_ESC_".$_->[3]."_ESC_".$_->[4]."\n" } @{$sth->fetchall_arrayref()}; + use warnings; + + Log3 ($name, 4, "DbRep $name - Now change values of selected array ... "); + + foreach my $upd (@row_array) { + # für jeden selektierten (zu ändernden) Datensatz Userfunktion anwenden und updaten + my ($device,$reading,$date,$time,$value,$unit) = ($upd =~ /^(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)$/); + + my $oval = $value; # Selektkriterium für Update alter Valuewert + my $VALUE = $value; + my $UNIT = $unit; + eval $new; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err"; + } + + $value = $VALUE if(defined $VALUE); + $unit = $UNIT if(defined $UNIT); + # Daten auf maximale Länge beschneiden (DbLog-Funktion !) + (undef,undef,undef,undef,$value,$unit) = DbLog_cutCol($hash->{dbloghash},"1","1","1","1",$value,$unit); + + $value =~ s/'/''/g; # escape ' with '' + $unit =~ s/'/''/g; # escape ' with '' + + # SQL zusammenstellen für DB-Update + $sql = "UPDATE history SET TIMESTAMP=TIMESTAMP,VALUE='$value',UNIT='$unit' WHERE TIMESTAMP = '$date $time' AND DEVICE = '$device' AND READING = '$reading' AND VALUE='$oval'"; + Log3 ($name, 5, "DbRep $name - SQL execute: $sql"); + $sth = $dbh->prepare($sql) ; + + $value =~ s/''/'/g; # escape back + $unit =~ s/''/'/g; # escape back + + eval { $sth->execute(); }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Failed to change old value \"$old\" to new value \"$new\": $@"); + $dbh->rollback() if(!$dbh->{AutoCommit}); + $dbh->disconnect(); + return "$name|''|''|$err"; + } else { + $dbh->commit() if(!$dbh->{AutoCommit}); + $urow++; + } + } + } + } + + $dbh->disconnect(); + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$urow|$rt|0|$old|$new"; +} + +#################################################################################################### +# Auswertungsroutine DB deviceRename/readingRename/changeValue +#################################################################################################### +sub change_Done($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $urow = $a[1]; + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + my $old = $a[4]; + my $new = $a[5]; + + my $renmode = delete $hash->{HELPER}{RENMODE}; + + # Befehl nach Procedure ausführen + my $erread = DbRep_afterproc($hash, $renmode); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set 
+ no warnings 'uninitialized'; + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue ($hash, "number_lines_updated", $urow); + + if($renmode eq "devren") { + ReadingsBulkUpdateValue ($hash, "device_renamed", "old: ".$old." to new: ".$new) if($urow != 0); + ReadingsBulkUpdateValue ($hash, "device_not_renamed", "Warning - old: ".$old." not found, not renamed to new: ".$new) + if($urow == 0); + } + if($renmode eq "readren") { + ReadingsBulkUpdateValue ($hash, "reading_renamed", "old: ".$old." to new: ".$new) if($urow != 0); + ReadingsBulkUpdateValue ($hash, "reading_not_renamed", "Warning - old: ".$old." not found, not renamed to new: ".$new) + if ($urow == 0); + } + if($renmode eq "changeval") { + ReadingsBulkUpdateValue ($hash, "value_changed", "old: ".$old." to new: ".$new) if($urow != 0); + ReadingsBulkUpdateValue ($hash, "value_not_changed", "Warning - old: ".$old." not found, not changed to new: ".$new) + if ($urow == 0); + } + + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + if ($urow != 0) { + Log3 ($name, 3, "DbRep ".(($hash->{ROLE} eq "Agent")?"Agent ":"")."$name - DEVICE renamed in \"$hash->{DATABASE}\", old: \"$old\", new: \"$new\", number: $urow ") if($renmode eq "devren"); + Log3 ($name, 3, "DbRep ".(($hash->{ROLE} eq "Agent")?"Agent ":"")."$name - READING renamed in \"$hash->{DATABASE}\", old: \"$old\", new: \"$new\", number: $urow ") if($renmode eq "readren"); + Log3 ($name, 3, "DbRep ".(($hash->{ROLE} eq "Agent")?"Agent ":"")."$name - VALUE changed in \"$hash->{DATABASE}\", old: \"$old\", new: \"$new\", number: $urow ") if($renmode eq "changeval"); + } else { + Log3 ($name, 3, "DbRep ".(($hash->{ROLE} eq "Agent")?"Agent ":"")."$name - WARNING - old device \"$old\" was not found in database \"$hash->{DATABASE}\" ") if($renmode eq "devren"); + Log3 ($name, 3, "DbRep ".(($hash->{ROLE} eq "Agent")?"Agent ":"")."$name - WARNING - old reading \"$old\" was not found in database \"$hash->{DATABASE}\" ") if($renmode eq "readren"); + Log3 ($name, 3, "DbRep ".(($hash->{ROLE} eq "Agent")?"Agent ":"")."$name - WARNING - old value \"$old\" not found in database \"$hash->{DATABASE}\" ") if($renmode eq "changeval"); + } + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierende DB-Abfrage fetchrows +#################################################################################################### +sub fetchrows_DoParse($) { + my ($string) = @_; + my ($name,$table,$device,$reading,$runtime_string_first,$runtime_string_next) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $limit = AttrVal($name, "limit", 1000); + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + my $fetchroute = AttrVal($name, "fetchRoute", "descent"); + my $valfilter = AttrVal($name, "valueFilter", undef); # nur Anzeige von Datensätzen die "valueFilter" enthalten + $fetchroute = ($fetchroute eq "descent")?"DESC":"ASC"; + my ($err,$dbh,$sth,$sql,$rowlist,$nrows); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - 
$@"); + return "$name|''|''|$err|''"; + } + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # SQL zusammenstellen für DB-Abfrage + if ($IsTimeSet) { + $sql = DbRep_createSelectSql($hash,$table,"DEVICE,READING,TIMESTAMP,VALUE,UNIT",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'","ORDER BY TIMESTAMP $fetchroute LIMIT ".($limit+1)); + } else { + $sql = DbRep_createSelectSql($hash,$table,"DEVICE,READING,TIMESTAMP,VALUE,UNIT",$device,$reading,undef,undef,"ORDER BY TIMESTAMP $fetchroute LIMIT ".($limit+1)); + } + + $sth = $dbh->prepare($sql); + + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + eval{$sth->execute();}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err|''"; + } + + no warnings 'uninitialized'; + my @row_array = map { $_->[0]."_ESC_".$_->[1]."_ESC_".($_->[2] =~ s/ /_ESC_/r)."_ESC_".$_->[3]."_ESC_".$_->[4]."\n" } @{$sth->fetchall_arrayref()}; + + # eventuell gesetzten Datensatz-Filter anwenden + if($valfilter) { + my @fiarr; + foreach my $row (@row_array) { + next if($row !~ /$valfilter/); + push @fiarr,$row; + } + @row_array = @fiarr; + } + + use warnings; + $nrows = $#row_array+1; # Anzahl der Ergebniselemente + pop @row_array if($nrows>$limit); # das zuviel selektierte Element wegpoppen wenn Limit überschritten + + s/\|/_E#S#C_/g for @row_array; # escape Pipe "|" + if ($utf8) { + $rowlist = Encode::encode_utf8(join('|', @row_array)); + } else { + $rowlist = join('|', @row_array); + } + Log3 ($name, 5, "DbRep $name -> row result list:\n$rowlist"); + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + $dbh->disconnect; + + # Daten müssen als Einzeiler zurückgegeben werden + $rowlist = encode_base64($rowlist,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$rowlist|$rt|0|$nrows"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage fetchrows +#################################################################################################### +sub fetchrows_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $rowlist = decode_base64($a[1]); + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + my $nrows = $a[4]; + my $name = $hash->{NAME}; + my $reading = AttrVal($name, "reading", undef); + my $limit = AttrVal($name, "limit", 1000); + my $color = ""; # Highlighting doppelter DB-Einträge + $color =~ s/#// if($color =~ /red|blue|brown|green|orange/); + my $ecolor = ""; # Ende Highlighting + my @row; + my $reading_runtime_string; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + my @row_array = split("\\|", $rowlist); + s/_E#S#C_/\|/g for @row_array; # escaped Pipe return to "|" + + Log3 ($name, 5, "DbRep $name - row_array decoded: @row_array"); + + # Readingaufbereitung + readingsBeginUpdate($hash); + my ($orow,$nrow,$oval,$nval); + my $dz = 1; # Index des Vorkommens im Selektionsarray + my $zs = ""; # Zusatz wenn device + Reading + 
Timestamp von folgenden DS gleich ist UND Value unterschiedlich + my $zsz = 1; # Zusatzzähler + foreach my $row (@row_array) { + my @a = split("_ESC_", $row, 6); + my $dev = $a[0]; + my $rea = $a[1]; + $a[3] =~ s/:/-/g; # substituieren unsupported characters ":" -> siehe fhem.pl + my $ts = $a[2]."_".$a[3]; + my $val = $a[4]; + my $unt = $a[5]; + $val = $unt?$val." ".$unt:$val; + + $nrow = $ts.$dev.$rea; + $nval = $val; + if($orow) { + if($orow.$oval eq $nrow.$val) { + $dz++; + $zs = ""; + $zsz = 1; + } else { + # wenn device + Reading + Timestamp gleich ist UND Value unterschiedlich -> dann Zusatz an Reading hängen + if(($orow eq $nrow) && ($oval ne $val)) { + $zs = "_".$zsz; + $zsz++; + } else { + $zs = ""; + $zsz = 1; + } + $dz = 1; + + } + } + $orow = $nrow; + $oval = $val; + + if ($reading && AttrVal($hash->{NAME}, "readingNameMap", "")) { + if($dz > 1 && AttrVal($name, "fetchMarkDuplicates", undef)) { + $reading_runtime_string = $ts."__".$color.$dz."__".AttrVal($hash->{NAME}, "readingNameMap", "").$zs.$ecolor; + } else { + $reading_runtime_string = $ts."__".$dz."__".AttrVal($hash->{NAME}, "readingNameMap", "").$zs; + } + } else { + if($dz > 1 && AttrVal($name, "fetchMarkDuplicates", undef)) { + $reading_runtime_string = $ts."__".$color.$dz."__".$dev."__".$rea.$zs.$ecolor; + } else { + $reading_runtime_string = $ts."__".$dz."__".$dev."__".$rea.$zs; + } + } + + ReadingsBulkUpdateValue($hash, $reading_runtime_string, $val); + } + my $sfx = AttrVal("global", "language", "EN"); + $sfx = ($sfx eq "EN" ? "" : "_$sfx"); + + ReadingsBulkUpdateValue($hash, "number_fetched_rows", ($nrows>$limit)?$nrows-1:$nrows); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,($nrows-$limit>0)? + "done - Warning: present rows exceed specified limit, adjust attribute limit":"done"); + readingsEndUpdate($hash, 1); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# DB-Abfrage delSeqDoublets +#################################################################################################### +sub delseqdoubl_DoParse($) { + my ($string) = @_; + my ($name,$opt,$device,$reading,$ts) = split("\\§", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + my $limit = AttrVal($name, "limit", 1000); + my $var = AttrVal($name, "seqDoubletsVariance", undef); + my $table = "history"; + my ($err,$dbh,$sth,$sql,$rowlist,$nrows,$selspec,$st,$var1,$var2); + + # Background-Startzeit + my $bst = [gettimeofday]; + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err|''|$opt"; + } + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + $selspec = "DEVICE,READING,TIMESTAMP,VALUE"; + + # SQL zusammenstellen für DB-Abfrage + $sql = DbRep_createSelectSql($hash,$table,$selspec,$device,$reading,"?","?","ORDER BY DEVICE,READING,TIMESTAMP ASC"); + $sth = $dbh->prepare_cached($sql); + + # DB-Abfrage zeilenweise für jeden Timearray-Eintrag + my @remain; + my @todel; + my $nremain = 0; + my $ntodel = 0; + my $ndel = 0; + my 
$rt = 0; + + no warnings 'uninitialized'; + + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + $runtime_string = encode_base64($runtime_string,""); + + # SQL-Startzeit + $st = [gettimeofday]; + + # SQL zusammenstellen für Logausgabe + my $sql1 = DbRep_createSelectSql($hash,$table,$selspec,$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",''); + Log3 ($name, 4, "DbRep $name - SQL execute: $sql1"); + + eval{$sth->execute($runtime_string_first, $runtime_string_next);}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err|''|$opt"; + } + + # SQL-Laufzeit ermitteln + $rt = $rt+tv_interval($st); + + # Beginn Löschlogik, Zusammenstellen der löschenden DS (warping) + # Array @sel -> die VERBLEIBENDEN Datensätze, @warp -> die zu löschenden Datensätze + my (@sel,@warp); + my ($or,$oor,$odev,$oread,$oval,$ooval,$ndev,$nread,$nval); + my $i = 0; + foreach my $nr (map { $_->[0]."_ESC_".$_->[1]."_ESC_".($_->[2] =~ s/ /_ESC_/r)."_ESC_".$_->[3] } @{$sth->fetchall_arrayref()}) { + ($ndev,$nread,undef,undef,$nval) = split("_ESC_", $nr); # Werte des aktuellen Elements + $or = pop @sel; # das letzte Element der Liste + ($odev,$oread,undef,undef,$oval) = split("_ESC_", $or); # Value des letzten Elements + if (looks_like_number($oval) && $var) { # Varianz +- falls $val numerischer Wert + $var1 = $oval + $var; + $var2 = $oval - $var; + } else { + undef $var1; + undef $var2; + } + $oor = pop @sel; # das vorletzte Element der Liste + $ooval = (split '_ESC_', $oor)[-1]; # Value des vorletzten Elements + if ($ndev.$nread ne $odev.$oread) { + $i = 0; # neues Device/Reading in einer Periode -> ooor soll erhalten bleiben + push (@sel,$oor) if($oor); + push (@sel,$or) if($or); + push (@sel,$nr); + } elsif ($i>=2 && ($ooval eq $oval && $oval eq $nval) || ($i>=2 && $var1 && $var2 && ($ooval <= $var1) && ($var2 <= $ooval) && ($nval <= $var1) && ($var2 <= $nval)) ) { + push (@sel,$oor); + push (@sel,$nr); + push (@warp,$or); # Array der zu löschenden Datensätze + if ($opt =~ /delete/ && $or) { # delete Datensätze + my ($dev,$read,$date,$time,$val) = split("_ESC_", $or); + my $dt = $date." 
".$time; + chomp($val); + $dev =~ s/'/''/g; # escape ' with '' + $read =~ s/'/''/g; # escape ' with '' + $val =~ s/'/''/g; # escape ' with '' + $st = [gettimeofday]; + my $dsql = "delete FROM $table where TIMESTAMP = '$dt' AND DEVICE = '$dev' AND READING = '$read' AND VALUE = '$val';"; + my $sthd = $dbh->prepare($dsql); + Log3 ($name, 4, "DbRep $name - SQL execute: $dsql"); + + eval {$sthd->execute();}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err|''|$opt"; + } + $ndel = $ndel+$sthd->rows; + $dbh->commit() if(!$dbh->{AutoCommit}); + + $rt = $rt+tv_interval($st); + } + } else { + push (@sel,$oor) if($oor); + push (@sel,$or) if($or); + push (@sel,$nr); + } + $i++; + } + if(@sel && $opt =~ /adviceRemain/) { + # die verbleibenden Datensätze nach Ausführung (nur zur Anzeige) + push(@remain,@sel) if($#remain+1 < $limit); + } + if(@warp && $opt =~ /adviceDelete/) { + # die zu löschenden Datensätze (nur zur Anzeige) + push(@todel,@warp) if($#todel+1 < $limit); + } + + $nremain = $nremain + $#sel+1 if(@sel); + $ntodel = $ntodel + $#warp+1 if(@warp); + my $sum = $nremain+$ntodel; + Log3 ($name, 3, "DbRep $name -> rows analyzed by \"$hash->{LASTCMD}\": $sum") if($sum && $opt =~ /advice/); + } + + Log3 ($name, 3, "DbRep $name -> rows deleted by \"$hash->{LASTCMD}\": $ndel") if($ndel); + + my $retn = ($opt =~ /adviceRemain/)?$nremain:($opt =~ /adviceDelete/)?$ntodel:$ndel; + + my @retarray = ($opt =~ /adviceRemain/)?@remain:($opt =~ /adviceDelete/)?@todel:" "; + s/\|/_E#S#C_/g for @retarray; # escape Pipe "|" + if ($utf8 && @retarray) { + $rowlist = Encode::encode_utf8(join('|', @retarray)); + } elsif(@retarray) { + $rowlist = join('|', @retarray); + } else { + $rowlist = 0; + } + + use warnings; + Log3 ($name, 5, "DbRep $name -> row result list:\n$rowlist"); + + $dbh->disconnect; + + # Daten müssen als Einzeiler zurückgegeben werden + $rowlist = encode_base64($rowlist,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + +return "$name|$rowlist|$rt|0|$retn|$opt"; +} + +#################################################################################################### +# Auswertungsroutine delSeqDoublets +#################################################################################################### +sub delseqdoubl_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $rowlist = decode_base64($a[1]); + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + my $nrows = $a[4]; + my $opt = $a[5]; + my $name = $hash->{NAME}; + my $reading = AttrVal($name, "reading", undef); + my $limit = AttrVal($name, "limit", 1000); + my @row; + my $l = 1; + my $reading_runtime_string; + my $erread; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "delSeq"); + + # Readingaufbereitung + readingsBeginUpdate($hash); + + no warnings 'uninitialized'; + if ($opt !~ /delete/ && $rowlist) { + my @row_array = split("\\|", $rowlist); + s/_E#S#C_/\|/g for @row_array; # escaped Pipe return to "|" + Log3 ($name, 5, "DbRep $name - row_array decoded: @row_array"); + foreach my $row (@row_array) { + last if($l >= $limit); + my @a = split("_ESC_", $row, 5); + my $dev = $a[0]; + my $rea = $a[1]; + $a[3] =~ 
s/:/-/g;   # substituieren unsupported characters ":" -> siehe fhem.pl
+          my $ts  = $a[2]."_".$a[3];
+          my $val = $a[4];
+
+          if ($reading && AttrVal($hash->{NAME}, "readingNameMap", "")) {
+              $reading_runtime_string = $ts."__".AttrVal($hash->{NAME}, "readingNameMap", "");
+          } else {
+              $reading_runtime_string = $ts."__".$dev."__".$rea;
+          }
+          ReadingsBulkUpdateValue($hash, $reading_runtime_string, $val);
+          $l++;
+      }
+  }
+
+  use warnings;
+  my $sfx = AttrVal("global", "language", "EN");
+  $sfx = ($sfx eq "EN" ? "" : "_$sfx");
+
+  my $rnam = ($opt =~ /adviceRemain/)?"number_rows_to_remain":($opt =~ /adviceDelete/)?"number_rows_to_delete":"number_rows_deleted";
+  ReadingsBulkUpdateValue($hash, "$rnam", "$nrows");
+  ReadingsBulkUpdateTimeState($hash,$brt,$rt,($l >= $limit)?
+      "done - Warning: not all items are shown, adjust attribute limit if you want to see more":"done");
+  readingsEndUpdate($hash, 1);
+
+  delete($hash->{HELPER}{RUNNING_PID});
+return;
+}
+
+####################################################################################################
+# nichtblockierende DB-Funktion expfile
+####################################################################################################
+sub expfile_DoParse($) {
+  my ($string) = @_;
+  my ($name, $device, $reading, $rsf, $file, $ts) = split("\\§", $string);
+  my $hash       = $defs{$name};
+  my $dbloghash  = $hash->{dbloghash};
+  my $dbconn     = $dbloghash->{dbconn};
+  my $dbuser     = $dbloghash->{dbuser};
+  my $dblogname  = $dbloghash->{NAME};
+  my $dbpassword = $attr{"sec$dblogname"}{secret};
+  my $utf8       = defined($hash->{UTF8})?$hash->{UTF8}:0;
+  my ($dbh,$sth,$sql);
+  my $err = 0;
+
+  # Background-Startzeit
+  my $bst = [gettimeofday];
+
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });};
+
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - $@");
+      return "$name|''|''|$err|''|''|''";
+  }
+
+  $rsf =~ s/[:\s]/_/g;
+  my $outfile = $file?$file:AttrVal($name, "expimpfile", undef);
+  $outfile =~ s/%TSB/$rsf/g;
+  my @t = localtime;
+  $outfile = ResolveDateWildcards($outfile, @t);
+  if (open(FH, ">:utf8", "$outfile")) {
+      binmode (FH) if(!$utf8);
+  } else {
+      $err = encode_base64("could not open ".$outfile.": ".$!,"");
+      return "$name|''|''|$err|''|''|''";
+  }
+
+  # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" 
in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + # DB-Abfrage zeilenweise für jeden Array-Eintrag + my $arrstr; + my $nrows = 0; + my $addon = "ORDER BY TIMESTAMP"; + no warnings 'uninitialized'; + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history","TIMESTAMP,DEVICE,TYPE,EVENT,READING,VALUE,UNIT",$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",$addon); + } else { + $sql = DbRep_createSelectSql($hash,"history","TIMESTAMP,DEVICE,TYPE,EVENT,READING,VALUE,UNIT",$device,$reading,undef,undef,$addon); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err|''|''|''"; + } + + while (my $row = $sth->fetchrow_arrayref) { + print FH DbRep_charfilter(join(',', map { s{"}{""}g; "\"$_\"";} @$row)), "\n"; + Log3 ($name, 5, "DbRep $name -> write row: @$row"); + # Anzahl der Datensätze + $nrows++; + } + + } + close(FH); + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + $sth->finish; + $dbh->disconnect; + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$nrows|$rt|$err|$device|$reading|$outfile"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Funktion expfile +#################################################################################################### +sub expfile_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $nrows = $a[1]; + my $bt = $a[2]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[3]?decode_base64($a[3]):undef; + my $name = $hash->{NAME}; + my $device = $a[4]; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $reading = $a[5]; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $outfile = $a[6]; + my $erread; + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "export"); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + my $ds = $device." -- " if ($device); + my $rds = $reading." -- " if ($reading); + my $export_string = $ds.$rds." 
-- ROWS EXPORTED TO FILE -- "; + + my $state = $erread?$erread:"done"; + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue ($hash, $export_string, $nrows); + ReadingsBulkUpdateTimeState($hash,$brt,$rt,$state); + readingsEndUpdate($hash, 1); + + my $rows = $ds.$rds.$nrows; + Log3 ($name, 3, "DbRep $name - Number of exported datasets from $hash->{DATABASE} to file $outfile: ".$rows); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# nichtblockierende DB-Funktion impfile +#################################################################################################### +sub impfile_Push($) { + my ($string) = @_; + my ($name, $rsf, $file) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbmodel = $hash->{dbloghash}{MODEL}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + my $err=0; + my $sth; + + # Background-Startzeit + my $bst = [gettimeofday]; + + my $dbh; + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });}; + + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err|''"; + } + + # check ob PK verwendet wird, @usepkx?Anzahl der Felder im PK:0 wenn kein PK, $pkx?Namen der Felder:none wenn kein PK + my ($usepkh,$usepkc,$pkh,$pkc) = DbRep_checkUsePK($hash,$dbloghash,$dbh); + + $rsf =~ s/[:\s]/_/g; + my $infile = $file?$file:AttrVal($name, "expimpfile", undef); + $infile =~ s/%TSB/$rsf/g; + my @t = localtime; + $infile = ResolveDateWildcards($infile, @t); + if (open(FH, "<:utf8", "$infile")) { + binmode (FH) if(!$utf8); + } else { + $err = encode_base64("could not open ".$infile.": ".$!,""); + return "$name|''|''|$err|''"; + } + + # only for this block because of warnings if details inline is not set + no warnings 'uninitialized'; + + # SQL-Startzeit + my $st = [gettimeofday]; + + my $al; + # Datei zeilenweise einlesen und verarbeiten ! + # Beispiel Inline: + # "2016-09-25 08:53:56","STP_5000","SMAUTILS","etotal: 11859.573","etotal","11859.573","" + + # insert history mit/ohne primary key + if ($usepkh && $dbloghash->{MODEL} eq 'MYSQL') { + eval { $sth = $dbh->prepare_cached("INSERT IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'SQLITE') { + eval { $sth = $dbh->prepare_cached("INSERT OR IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'POSTGRESQL') { + eval { $sth = $dbh->prepare_cached("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) 
ON CONFLICT DO NOTHING"); };
+  } else {
+      eval { $sth = $dbh->prepare_cached("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); };
+  }
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - $@");
+      $dbh->disconnect();
+      return "$name|''|''|$err|''";
+  }
+
+  $dbh->begin_work();
+
+  my $irowdone  = 0;
+  my $irowcount = 0;
+  my $warn      = 0;
+  while (<FH>) {
+      $al = $_;
+      chomp $al;
+      my @alarr = split("\",\"", $al);
+      foreach(@alarr) {
+          tr/"//d;
+      }
+      my $i_timestamp = $alarr[0];
+      # $i_timestamp =~ tr/"//d;
+      my $i_device    = $alarr[1];
+      my $i_type      = $alarr[2];
+      my $i_event     = $alarr[3];
+      my $i_reading   = $alarr[4];
+      my $i_value     = $alarr[5];
+      my $i_unit      = $alarr[6] ? $alarr[6]: " ";
+      $irowcount++;
+      next if(!$i_timestamp);    # empty dataset
+
+      # check whether the TIMESTAMP format is ok
+      my ($i_date, $i_time) = split(" ",$i_timestamp);
+      if ($i_date !~ /(\d{4})-(\d{2})-(\d{2})/ || $i_time !~ /(\d{2}):(\d{2}):(\d{2})/) {
+          $err = encode_base64("Format of date/time is not valid in row $irowcount of $infile. Must be format \"YYYY-MM-DD HH:MM:SS\" !","");
+          Log3 ($name, 2, "DbRep $name -> ERROR - Import from file $infile was not done. Invalid date/time field format in row $irowcount.");
+          close(FH);
+          $dbh->rollback;
+          return "$name|''|''|$err|''";
+      }
+
+      # truncate data to maximum length (according to the field lengths of the DbLog DB create scripts) if not SQLite
+      if ($dbmodel ne 'SQLITE') {
+          $i_device  = substr($i_device,0, $dbrep_col{DEVICE});
+          $i_event   = substr($i_event,0, $dbrep_col{EVENT});
+          $i_reading = substr($i_reading,0, $dbrep_col{READING});
+          $i_value   = substr($i_value,0, $dbrep_col{VALUE});
+          $i_unit    = substr($i_unit,0, $dbrep_col{UNIT}) if($i_unit);
+      }
+
+      Log3 ($name, 5, "DbRep $name -> data to insert Timestamp: $i_timestamp, Device: $i_device, Type: $i_type, Event: $i_event, Reading: $i_reading, Value: $i_value, Unit: $i_unit");
+
+      if($i_timestamp && $i_device && $i_reading) {
+
+          eval {$sth->execute($i_timestamp, $i_device, $i_type, $i_event, $i_reading, $i_value, $i_unit);};
+
+          if ($@) {
+              $err = encode_base64($@,"");
+              Log3 ($name, 2, "DbRep $name - Failed to insert new dataset into database: $@");
+              close(FH);
+              $dbh->rollback;
+              $dbh->disconnect;
+              return "$name|''|''|$err|''";
+          } else {
+              $irowdone++;
+          }
+
+      } else {
+          my $c = !$i_timestamp?"field \"timestamp\" is empty":!$i_device?"field \"device\" is empty":"field \"reading\" is empty";
+          $err = encode_base64("format error in row $irowcount of $infile - cause: $c","");
+          Log3 ($name, 2, "DbRep $name -> ERROR - Import of datasets NOT done. Format error in row $irowcount of $infile - cause: $c");
+          close(FH);
+          $dbh->rollback;
+          $dbh->disconnect;
+          return "$name|''|''|$err|''";
+      }
+  }
+
+  $dbh->commit;
+  $dbh->disconnect;
+  close(FH);
+
+  # determine SQL runtime
+  my $rt = tv_interval($st);
+
+  # determine background runtime
+  my $brt = tv_interval($bst);
+
+  $rt = $rt.",".$brt;
+
+  return "$name|$irowdone|$rt|$err|$infile";
+}
+
+####################################################################################################
+#             evaluation routine of the non-blocking DB function impfile
+####################################################################################################
+sub impfile_PushDone($) {
+  my ($string)  = @_;
+  my @a         = split("\\|",$string);
+  my $hash      = $defs{$a[0]};
+  my $irowdone  = $a[1];
+  my $bt        = $a[2];
+  my ($rt,$brt) = split(",", $bt);
+  my $err       = $a[3]?decode_base64($a[3]):undef;
+  my $name      = $hash->{NAME};
+  my $infile    = $a[4];
+  my $erread;
+
+  # execute command after procedure
+  $erread = DbRep_afterproc($hash, "import");
+
+  if ($err) {
+      ReadingsSingleUpdateValue ($hash, "errortext", $err, 1);
+      ReadingsSingleUpdateValue ($hash, "state", "error", 1);
+      delete($hash->{HELPER}{RUNNING_PID});
+      return;
+  }
+
+  # only for this block because of warnings if details of readings are not set
+  no warnings 'uninitialized';
+
+  my $import_string = " -- ROWS IMPORTED FROM FILE -- ";
+
+  my $state = $erread?$erread:"done";
+  readingsBeginUpdate($hash);
+  ReadingsBulkUpdateValue ($hash, $import_string, $irowdone);
+  ReadingsBulkUpdateTimeState($hash,$brt,$rt,$state);
+  readingsEndUpdate($hash, 1);
+
+  Log3 ($name, 3, "DbRep $name - Number of imported datasets to $hash->{DATABASE} from file $infile: $irowdone");
+
+  delete($hash->{HELPER}{RUNNING_PID});
+
+return;
+}
+
+####################################################################################################
+#    non-blocking DB query sqlCmd - generic SQL command - name | opt | sqlcommand
+####################################################################################################
+# set logdbrep sqlCmd select count(*) from history
+# set logdbrep sqlCmd select DEVICE,count(*) from history group by DEVICE HAVING count(*) > 10000
+sub sqlCmd_DoParse($) {
+  my ($string) = @_;
+  my ($name, $opt, $runtime_string_first, $runtime_string_next, $cmd) = split("\\|", $string);
+  my $hash       = $defs{$name};
+  my $dbloghash  = $hash->{dbloghash};
+  my $dbconn     = $dbloghash->{dbconn};
+  my $dbuser     = $dbloghash->{dbuser};
+  my $dblogname  = $dbloghash->{NAME};
+  my $dbpassword = $attr{"sec$dblogname"}{secret};
+  my $utf8       = defined($hash->{UTF8})?$hash->{UTF8}:0;
+  my $srs        = AttrVal($name, "sqlResultFieldSep", "|");
+  my $err;
+
+  # background start time
+  my $bst = [gettimeofday];
+
+  my $dbh;
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });};
+
+  if ($@) {
+     $err = encode_base64($@,"");
+     Log3 ($name, 2, "DbRep $name - $@");
+     return "$name|''|$opt|$cmd|''|''|$err";
+  }
+
+  # only for this block because of warnings if details of readings are not set
+  no warnings 'uninitialized';
+
+  my $sql = ($cmd =~ m/\;$/)?$cmd:$cmd.";";
+  # Allow inplace replacement of keywords for timings (use time attribute syntax)
+  $sql =~ s/§timestamp_begin§/'$runtime_string_first'/g;
+  $sql =~ s/§timestamp_end§/'$runtime_string_next'/g;
+
+# Debug "SQL :".$sql.":";
+
+  Log3($name, 4, "DbRep $name - SQL execute: $sql");
+
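+  # Illustrative example (command and values are assumptions, not module code):
+  # with time borders set on the device, a command such as
+  #   set <DbRep-device> sqlCmd select count(*) from history where TIMESTAMP >= §timestamp_begin§ and TIMESTAMP <= §timestamp_end§
+  # arrives at this point with both placeholders already replaced by the quoted
+  # borders, e.g.
+  #   select count(*) from history where TIMESTAMP >= '2018-10-01 00:00:00' and TIMESTAMP <= '2018-10-17 23:59:59';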
+  # SQL start time
+  my $st = [gettimeofday];
+
+  my ($sth,$r);
+
+  eval {$sth = $dbh->prepare($sql);
+        $r = $sth->execute();
+       };
+
+  if ($@) {
+      # error during sql execute
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - ERROR - $@");
+      $dbh->disconnect;
+      return "$name|''|$opt|$sql|''|''|$err";
+  }
+
+  my @rows;
+  my $nrows = 0;
+  if($sql =~ m/^\s*(select|pragma|show)/is) {
+      while (my @line = $sth->fetchrow_array()) {
+          Log3 ($name, 4, "DbRep $name - SQL result: @line");
+          my $row = join("$srs", @line);
+
+          # escape the join delimiter "§"
+          $row =~ s/§/|°escaped°|/g;
+
+          push(@rows, $row);
+          # count datasets
+          $nrows++;
+      }
+  } else {
+      $nrows = $sth->rows;
+      eval {$dbh->commit() if(!$dbh->{AutoCommit});};
+      if ($@) {
+          $err = encode_base64($@,"");
+          Log3 ($name, 2, "DbRep $name - ERROR - $@");
+          $dbh->disconnect;
+          return "$name|''|$opt|$sql|''|''|$err";
+      }
+
+      push(@rows, $r);
+      my $com = (split(" ",$sql, 2))[0];
+      Log3 ($name, 3, "DbRep $name - Number of entries processed in db $hash->{DATABASE}: $nrows by $com");
+  }
+
+  $sth->finish;
+
+  # determine SQL runtime
+  my $rt = tv_interval($st);
+
+  $dbh->disconnect;
+
+  # data must be returned as a single line
+  my $rowstring = join("§", @rows);
+  $rowstring = encode_base64($rowstring,"");
+
+  # determine background runtime
+  my $brt = tv_interval($bst);
+
+  $rt = $rt.",".$brt;
+
+  return "$name|$rowstring|$opt|$sql|$nrows|$rt|$err";
+}
+
+####################################################################################################
+#             evaluation routine of the non-blocking DB query sqlCmd
+####################################################################################################
+sub sqlCmd_ParseDone($) {
+  my ($string)  = @_;
+  my @a         = split("\\|",$string);
+  my $hash      = $defs{$a[0]};
+  my $name      = $hash->{NAME};
+  my $rowstring = decode_base64($a[1]);
+  my $opt       = $a[2];
+  my $cmd       = $a[3];
+  my $nrows     = $a[4];
+  my $bt        = $a[5];
+  my ($rt,$brt) = split(",", $bt);
+  my $err       = $a[6]?decode_base64($a[6]):undef;
+  my $srf       = AttrVal($name, "sqlResultFormat", "separated");
+  my $srs       = AttrVal($name, "sqlResultFieldSep", "|");
+
+  if ($err) {
+      ReadingsSingleUpdateValue ($hash, "errortext", $err, 1);
+      ReadingsSingleUpdateValue ($hash, "state", "error", 1);
+      delete($hash->{HELPER}{RUNNING_PID});
+      return;
+  }
+
+  Log3 ($name, 5, "DbRep $name - SQL result decoded: $rowstring") if($rowstring);
+
+  no warnings 'uninitialized';
+
+  # prepare readings
+  readingsBeginUpdate($hash);
+
+  ReadingsBulkUpdateValue ($hash, "sqlCmd", $cmd);
+  ReadingsBulkUpdateValue ($hash, "sqlResultNumRows", $nrows);
+
+  # fill the drop-down list of previous sqlCmd commands and save it to the key file
+  # my $hl = $hash->{HELPER}{SQLHIST};
+  my @sqlhist = split(",",$hash->{HELPER}{SQLHIST});
+  $cmd =~ s/\s/ /g;
+  $cmd =~ s/,//g;
+  my $hlc = AttrVal($name, "sqlCmdHistoryLength", 0);    # number of entries in the drop-down list
+  if(!@sqlhist || (@sqlhist && !($cmd ~~ @sqlhist))) {
+      unshift @sqlhist,$cmd;
+      pop @sqlhist if(@sqlhist > $hlc);
+      my $hl = join(",",@sqlhist);
+      $hash->{HELPER}{SQLHIST} = $hl;
+      DbRep_setCmdFile($name."_sqlCmdList",$hl,$hash);
+  }
+
+  if ($srf eq "sline") {
+      $rowstring =~ s/§/]|[/g;
+      $rowstring =~ s/\|°escaped°\|/§/g;
+      ReadingsBulkUpdateValue ($hash, "SqlResult", $rowstring);
+
+  } elsif ($srf eq "table") {
+      my $res = "<html><table border=2 bordercolor='darkgreen' cellspacing=0>";
+      my @rows = split( /§/, $rowstring );
+      my $row;
+      foreach $row ( @rows ) {
+          $row =~ s/\|°escaped°\|/§/g;
+          $row =~ s/$srs/\|/g if($srs !~ /\|/);
+          $row =~ s/\|/<\/td><td style='padding-right:5px;padding-left:5px'>/g;
+          $res .= "<tr><td style='padding-right:5px;padding-left:5px'>".$row."</td></tr>";
+      }
+      $row .= $res."</table></html>";
+
+      ReadingsBulkUpdateValue ($hash,"SqlResult", $row);
+
+  } elsif ($srf eq "mline") {
+      my $res = "<html>";
+      my @rows = split( /§/, $rowstring );
+      my $row;
+      foreach $row ( @rows ) {
+          $row =~ s/\|°escaped°\|/§/g;
+          $res .= $row."<br>";
+      }
+      $row .= $res."</html>";
+
+      ReadingsBulkUpdateValue ($hash, "SqlResult", $row );
+
+  } elsif ($srf eq "separated") {
+      my @rows = split( /§/, $rowstring );
+      my $bigint = @rows;
+      my $numd = ceil(log10($bigint));
+      my $formatstr = sprintf('%%%d.%dd', $numd, $numd);
+      my $i = 0;
+      foreach my $row ( @rows ) {
+          $i++;
+          $row =~ s/\|°escaped°\|/§/g;
+          my $fi = sprintf($formatstr, $i);
+          ReadingsBulkUpdateValue ($hash, "SqlResultRow_".$fi, $row);
+      }
+  } elsif ($srf eq "json") {
+      my %result = ();
+      my @rows = split( /§/, $rowstring );
+      my $bigint = @rows;
+      my $numd = ceil(log10($bigint));
+      my $formatstr = sprintf('%%%d.%dd', $numd, $numd);
+      my $i = 0;
+      foreach my $row ( @rows ) {
+          $i++;
+          $row =~ s/\|°escaped°\|/§/g;
+          my $fi = sprintf($formatstr, $i);
+          $result{$fi} = $row;
+      }
+      my $json = toJSON(\%result);    # at least fhem.pl 14348 2017-05-22 20:25:06Z
+      ReadingsBulkUpdateValue ($hash, "SqlResult", $json);
+  }
+
+  ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done");
+  readingsEndUpdate($hash, 1);
+
+  delete($hash->{HELPER}{RUNNING_PID});
+
+return;
+}
+
+####################################################################################################
+#                     non-blocking DB query get db metadata
+####################################################################################################
+sub dbmeta_DoParse($) {
+  my ($string)   = @_;
+  my @a          = split("\\|",$string);
+  my $name       = $a[0];
+  my $hash       = $defs{$name};
+  my $opt        = $a[1];
+  my $dbloghash  = $hash->{dbloghash};
+  my $dbconn     = $dbloghash->{dbconn};
+  my $db         = $hash->{DATABASE};
+  my $dbuser     = $dbloghash->{dbuser};
+  my $dblogname  = $dbloghash->{NAME};
+  my $dbpassword = $attr{"sec$dblogname"}{secret};
+  my $dbmodel    = $dbloghash->{MODEL};
+  my $utf8       = defined($hash->{UTF8})?$hash->{UTF8}:0;
+  my ($dbh,$sth,$sql);
+  my $err;
+
+  # background start time
+  my $bst = [gettimeofday];
+
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });};
+
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - $@");
+      return "$name|''|''|''|$err";
+  }
+
+  # only for this block because of warnings if details of readings are not set
+  no warnings 'uninitialized';
+
+  # build the list of parameters to display, otherwise all ("%"), depending on $opt
+  my $param = AttrVal($name, "showVariables", "%") if($opt eq "dbvars");
+  $param = AttrVal($name, "showSvrInfo", "[A-Z_]") if($opt eq "svrinfo");
+  $param = AttrVal($name, "showStatus", "%") if($opt eq "dbstatus");
+  $param = "1" if($opt =~ /tableinfo|procinfo/);    # dummy entry for a single loop pass
+  my @parlist = split(",",$param);
+
+  # SQL start time
+  my $st = [gettimeofday];
+
+  my @row_array;
+
+  # due to incompatible changes made in MySQL 5.7.5, see http://johnemb.blogspot.de/2014/09/adding-or-removing-individual-sql-modes.html
+  if($dbmodel eq "MYSQL") {
+      eval {$dbh->do("SET sql_mode=(SELECT REPLACE(\@\@sql_mode,'ONLY_FULL_GROUP_BY',''));");};
+  }
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - $@");
+      $dbh->disconnect;
+      return "$name|''|''|''|$err";
+  }
+
+  if ($opt ne "svrinfo") {
+      foreach my $ple (@parlist) {
+          if ($opt eq "dbvars") {
+              $sql = "show variables like '$ple';";
+          } elsif ($opt eq "dbstatus") {
+              $sql = "show global status like '$ple';";
+          } elsif ($opt eq "tableinfo") {
+              $sql = "show Table Status from $db;";
+          } elsif ($opt eq "procinfo") {
+              $sql = "show full processlist;";
+          }
+
- SQL execute: $sql"); + + $sth = $dbh->prepare($sql); + eval {$sth->execute();}; + + if ($@) { + # error bei sql-execute + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|''|$err"; + + } else { + # kein error bei sql-execute + if ($opt eq "tableinfo") { + $param = AttrVal($name, "showTableInfo", "[A-Z_]"); + $param =~ s/,/\|/g; + $param =~ tr/%//d; + while ( my $line = $sth->fetchrow_hashref()) { + + Log3 ($name, 5, "DbRep $name - SQL result: $line->{Name}, $line->{Version}, $line->{Row_format}, $line->{Rows}, $line->{Avg_row_length}, $line->{Data_length}, $line->{Max_data_length}, $line->{Index_length}, $line->{Data_free}, $line->{Auto_increment}, $line->{Create_time}, $line->{Check_time}, $line->{Collation}, $line->{Checksum}, $line->{Create_options}, $line->{Comment}"); + + if($line->{Name} =~ m/($param)/i) { + push(@row_array, $line->{Name}.".engine ".$line->{Engine}) if($line->{Engine}); + push(@row_array, $line->{Name}.".version ".$line->{Version}) if($line->{Version}); + push(@row_array, $line->{Name}.".row_format ".$line->{Row_format}) if($line->{Row_format}); + push(@row_array, $line->{Name}.".number_of_rows ".$line->{Rows}) if($line->{Rows}); + push(@row_array, $line->{Name}.".avg_row_length ".$line->{Avg_row_length}) if($line->{Avg_row_length}); + push(@row_array, $line->{Name}.".data_length_MB ".sprintf("%.2f",$line->{Data_length}/1024/1024)) if($line->{Data_length}); + push(@row_array, $line->{Name}.".max_data_length_MB ".sprintf("%.2f",$line->{Max_data_length}/1024/1024)) if($line->{Max_data_length}); + push(@row_array, $line->{Name}.".index_length_MB ".sprintf("%.2f",$line->{Index_length}/1024/1024)) if($line->{Index_length}); + push(@row_array, $line->{Name}.".data_index_length_MB ".sprintf("%.2f",($line->{Data_length}+$line->{Index_length})/1024/1024)); + push(@row_array, $line->{Name}.".data_free_MB ".sprintf("%.2f",$line->{Data_free}/1024/1024)) if($line->{Data_free}); + push(@row_array, $line->{Name}.".auto_increment ".$line->{Auto_increment}) if($line->{Auto_increment}); + push(@row_array, $line->{Name}.".create_time ".$line->{Create_time}) if($line->{Create_time}); + push(@row_array, $line->{Name}.".update_time ".$line->{Update_time}) if($line->{Update_time}); + push(@row_array, $line->{Name}.".check_time ".$line->{Check_time}) if($line->{Check_time}); + push(@row_array, $line->{Name}.".collation ".$line->{Collation}) if($line->{Collation}); + push(@row_array, $line->{Name}.".checksum ".$line->{Checksum}) if($line->{Checksum}); + push(@row_array, $line->{Name}.".create_options ".$line->{Create_options}) if($line->{Create_options}); + push(@row_array, $line->{Name}.".comment ".$line->{Comment}) if($line->{Comment}); + } + } + } elsif ($opt eq "procinfo") { + my $res = ""; + $res .= ""; + $res .= ""; + $res .= ""; + $res .= ""; + $res .= ""; + $res .= ""; + $res .= ""; + $res .= ""; + $res .= ""; + while (my @line = $sth->fetchrow_array()) { + Log3 ($name, 4, "DbRep $name - SQL result: @line"); + my $row = join("|", @line); + $row =~ tr/ A-Za-z0-9!"#$§%&'()*+,-.\/:;<=>?@[\]^_`{|}~//cd; + $row =~ s/\|/<\/td>"; + } + my $tab .= $res."
IDUSERHOSTDBCMDTIME_SecSTATEINFOPROGRESS
/g; + $res .= "
".$row."
"; + push(@row_array, "ProcessList ".$tab); + + } else { + while (my @line = $sth->fetchrow_array()) { + Log3 ($name, 4, "DbRep $name - SQL result: @line"); + my $row = join("§", @line); + $row =~ s/ /_/g; + @line = split("§", $row); + push(@row_array, $line[0]." ".$line[1]); + } + } + } + $sth->finish; + } + } else { + $param =~ s/,/\|/g; + $param =~ tr/%//d; + # Log3 ($name, 5, "DbRep $name - showDbInfo: $param"); + + if($dbmodel eq 'SQLITE') { + my $sf = $dbh->sqlite_db_filename(); + if ($@) { + # error bei sql-execute + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|''|$err"; + } else { + # kein error bei sql-execute + my $key = "SQLITE_DB_FILENAME"; + push(@row_array, $key." ".$sf) if($key =~ m/($param)/i); + } + my @a = split(' ',qx(du -m $hash->{DATABASE})) if ($^O =~ m/linux/i || $^O =~ m/unix/i); + my $key = "SQLITE_FILE_SIZE_MB"; + push(@row_array, $key." ".$a[0]) if($key =~ m/($param)/i); + } + + my $info; + while( my ($key,$value) = each(%GetInfoType) ) { + eval { $info = $dbh->get_info($GetInfoType{"$key"}) }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|''|$err"; + } else { + if($utf8) { + $info = Encode::encode_utf8($info) if($info); + } + push(@row_array, $key." ".$info) if($key =~ m/($param)/i); + } + } + } + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + $dbh->disconnect; + + my $rowlist = join('§', @row_array); + Log3 ($name, 5, "DbRep $name -> row_array: \n@row_array"); + + # Daten müssen als Einzeiler zurückgegeben werden + $rowlist = encode_base64($rowlist,""); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + return "$name|$rowlist|$rt|$opt|0"; +} + +#################################################################################################### +# Auswertungsroutine der nichtblockierenden DB-Abfrage get db Metadaten +#################################################################################################### +sub dbmeta_ParseDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $name = $hash->{NAME}; + my $rowlist = decode_base64($a[1]); + my $bt = $a[2]; + my $opt = $a[3]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[4]?decode_base64($a[4]):undef; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + delete($hash->{HELPER}{RUNNING_PID}); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + # Readingaufbereitung + readingsBeginUpdate($hash); + + my @row_array = split("§", $rowlist); + Log3 ($name, 5, "DbRep $name - SQL result decoded: \n@row_array") if(@row_array); + + my $pre = ""; + $pre = "VAR_" if($opt eq "dbvars"); + $pre = "STAT_" if($opt eq "dbstatus"); + $pre = "INFO_" if($opt eq "tableinfo"); + + foreach my $row (@row_array) { + my @a = split(" ", $row, 2); + my $k = $a[0]; + my $v = $a[1]; + ReadingsBulkUpdateValue ($hash, $pre.$k, $v); + } + + ReadingsBulkUpdateTimeState($hash,$brt,$rt,"done"); + readingsEndUpdate($hash, 1); + + # InternalTimer(time+0.5, "browser_refresh", $hash, 0); + + delete($hash->{HELPER}{RUNNING_PID}); + +return; +} + +#################################################################################################### +# optimize Tables alle Datenbanken 
+sub DbRep_optimizeTables($) {
+  my ($name)     = @_;
+  my $hash       = $defs{$name};
+  my $dbloghash  = $hash->{dbloghash};
+  my $dbconn     = $dbloghash->{dbconn};
+  my $dbuser     = $dbloghash->{dbuser};
+  my $dblogname  = $dbloghash->{NAME};
+  my $dbmodel    = $dbloghash->{MODEL};
+  my $dbpassword = $attr{"sec$dblogname"}{secret};
+  my $dbname     = $hash->{DATABASE};
+  my $value      = 0;
+  my ($dbh,$sth,$query,$err,$r,$db_MB_start,$db_MB_end);
+  my (%db_tables,@tablenames);
+
+  # background start time
+  my $bst = [gettimeofday];
+
+  # connect to DB
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });};
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - $@");
+      return "$name|''|$err|''|''";
+  }
+
+  # SQL start time
+  my $st = [gettimeofday];
+
+  if ($dbmodel =~ /MYSQL/) {
+      # determine the properties of the existing tables (SHOW TABLE STATUS -> Rows are not exact !!)
+      $query = "SHOW TABLE STATUS FROM `$dbname`";
+
+      Log3 ($name, 5, "DbRep $name - current query: $query ");
+      Log3 ($name, 3, "DbRep $name - Searching for tables inside database $dbname....");
+
+      eval { $sth = $dbh->prepare($query);
+             $sth->execute;
+           };
+      if ($@) {
+          $err = encode_base64($@,"");
+          Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! MySQL-Error: ".$@);
+          $sth->finish;
+          $dbh->disconnect;
+          return "$name|''|$err|''|''";
+      }
+
+      while ( $value = $sth->fetchrow_hashref()) {
+          # verbose 5 logging
+          Log3 ($name, 5, "DbRep $name - ......... Table definition found: .........");
+          foreach my $tk (sort(keys(%$value))) {
+              Log3 ($name, 5, "DbRep $name - $tk: $value->{$tk}") if(defined($value->{$tk}) && $tk ne "Rows");
+          }
+          Log3 ($name, 5, "DbRep $name - ......... Table definition END ............");
+
+          # check for old MySQL3-Syntax Type=xxx
+          if (defined $value->{Type}) {
+              # port old index type to index engine, so we can use the index Engine in the rest of the script
+              $value->{Engine} = $value->{Type};
+          }
+          $db_tables{$value->{Name}} = $value;
+
+      }
+
+      @tablenames = sort(keys(%db_tables));
+
+      if (@tablenames < 1) {
+          $err = "There are no tables inside database $dbname ! It doesn't make sense to optimize an empty database. Skipping this one.";
+          Log3 ($name, 2, "DbRep $name - $err");
+          $err = encode_base64($err,"");
+          $sth->finish;
+          $dbh->disconnect;
+          return "$name|''|$err|''|''";
+      }
+
+      # optimize tables
+      $hash->{HELPER}{DBTABLES} = \%db_tables;
+      ($err,$db_MB_start,$db_MB_end) = DbRep_mysqlOptimizeTables($hash,$dbh,@tablenames);
+      if ($err) {
+          $err = encode_base64($err,"");
+          return "$name|''|$err|''|''";
+      }
+  }
+
+  if ($dbmodel =~ /SQLITE/) {
+      # determine initial size
+      $db_MB_start = (split(' ',qx(du -m $hash->{DATABASE})))[0] if ($^O =~ m/linux/i || $^O =~ m/unix/i);
+      Log3 ($name, 3, "DbRep $name - Size of database $dbname before optimize (MB): $db_MB_start");
+      $query = "VACUUM";
+      Log3 ($name, 5, "DbRep $name - current query: $query ");
+
+      Log3 ($name, 3, "DbRep $name - VACUUM database $dbname....");
+      eval {$sth = $dbh->prepare($query);
+            $r = $sth->execute();
+           };
+      if ($@) {
+          $err = encode_base64($@,"");
+          Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! 
SQLite-Error: ".$@); + $sth->finish; + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + + # Endgröße ermitteln + $db_MB_end = (split(' ',qx(du -m $hash->{DATABASE})))[0] if ($^O =~ m/linux/i || $^O =~ m/unix/i); + Log3 ($name, 3, "DbRep $name - Size of database $dbname after optimize (MB): $db_MB_end"); + } + + if ($dbmodel =~ /POSTGRESQL/) { + # Anfangsgröße ermitteln + $query = "SELECT pg_size_pretty(pg_database_size('$dbname'))"; + Log3 ($name, 5, "DbRep $name - current query: $query "); + eval { $sth = $dbh->prepare($query); + $sth->execute; + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! PostgreSQL-Error: ".$@); + $sth->finish; + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + + $value = $sth->fetchrow(); + $value =~ tr/MB//d; + $db_MB_start = sprintf("%.2f",$value); + Log3 ($name, 3, "DbRep $name - Size of database $dbname before optimize (MB): $db_MB_start"); + + Log3 ($name, 3, "DbRep $name - VACUUM database $dbname...."); + + $query = "vacuum history"; + + Log3 ($name, 5, "DbRep $name - current query: $query "); + + eval {$sth = $dbh->prepare($query); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! PostgreSQL-Error: ".$@); + $sth->finish; + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + + # Endgröße ermitteln + $query = "SELECT pg_size_pretty(pg_database_size('$dbname'))"; + Log3 ($name, 5, "DbRep $name - current query: $query "); + eval { $sth = $dbh->prepare($query); + $sth->execute; + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! PostgreSQL-Error: ".$@); + $sth->finish; + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + + $value = $sth->fetchrow(); + $value =~ tr/MB//d; + $db_MB_end = sprintf("%.2f",$value); + Log3 ($name, 3, "DbRep $name - Size of database $dbname after optimize (MB): $db_MB_end"); + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + Log3 ($name, 3, "DbRep $name - Optimize tables of database $dbname finished, total time used: ".sprintf("%.0f",$brt)." 
sec."); + +return "$name|$rt|''|$db_MB_start|$db_MB_end"; +} + +#################################################################################################### +# Auswertungsroutine optimize tables +#################################################################################################### +sub DbRep_OptimizeDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $bt = $a[1]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[2]?decode_base64($a[2]):undef; + my $db_MB_start = $a[3]; + my $db_MB_end = $a[4]; + my $name = $hash->{NAME}; + my $erread; + + delete($hash->{HELPER}{RUNNING_OPTIMIZE}); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue($hash, "SizeDbBegin_MB", $db_MB_start); + ReadingsBulkUpdateValue($hash, "SizeDbEnd_MB", $db_MB_end); + readingsEndUpdate($hash, 1); + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "optimize"); + + my $state = $erread?$erread:"optimize tables finished"; + readingsBeginUpdate($hash); + ReadingsBulkUpdateTimeState($hash,$brt,undef,$state); + readingsEndUpdate($hash, 1); + + Log3 ($name, 3, "DbRep $name - Optimize tables finished successfully. "); + +return; +} + +#################################################################################################### +# nicht blockierende Dump-Routine für MySQL (clientSide) +#################################################################################################### +sub mysql_DoDumpClientSide($) { + my ($name) = @_; + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $dbname = $hash->{DATABASE}; + my $dump_path_def = $attr{global}{modpath}."/log/"; + my $dump_path = AttrVal($name, "dumpDirLocal", $dump_path_def); + $dump_path = $dump_path."/" unless($dump_path =~ m/\/$/); + my $optimize_tables_beforedump = AttrVal($name, "optimizeTablesBeforeDump", 0); + my $memory_limit = AttrVal($name, "dumpMemlimit", 100000); + my $my_comment = AttrVal($name, "dumpComment", ""); + my $dumpspeed = AttrVal($name, "dumpSpeed", 10000); + my $ebd = AttrVal($name, "executeBeforeProc", undef); + my $ead = AttrVal($name, "executeAfterProc", undef); + my $mysql_commentstring = "-- "; + my $character_set = "utf8"; + my $repver = $hash->{VERSION}; + my $sql_text = ''; + my $sql_file = ''; + my $dbpraefix = ""; + my ($dbh,$sth,$tablename,$sql_create,$rct,$insert,$first_insert,$backupfile,$drc,$drh,$e, + $sql_daten,$inhalt,$filesize,$totalrecords,$status_start,$status_end,$err,$db_MB_start,$db_MB_end); + my (@ar,@tablerecords,@tablenames,@tables,@ergebnis); + my (%db_tables); + + # Background-Startzeit + my $bst = [gettimeofday]; + + Log3 ($name, 3, "DbRep $name - Starting dump of database '$dbname'"); + + ##################### Beginn Dump ######################## + ############################################################## + + undef(%db_tables); + + # Startzeit ermitteln + my ($Sekunden, $Minuten, $Stunden, $Monatstag, $Monat, $Jahr, $Wochentag, $Jahrestag, $Sommerzeit) = localtime(time); + $Jahr += 1900; + $Monat += 1; + $Jahrestag += 1; + my $CTIME_String = strftime "%Y-%m-%d %T",localtime(time); + my 
$time_stamp = $Jahr."_".sprintf("%02d",$Monat)."_".sprintf("%02d",$Monatstag)."_".sprintf("%02d",$Stunden)."_".sprintf("%02d",$Minuten); + my $starttime = sprintf("%02d",$Monatstag).".".sprintf("%02d",$Monat).".".$Jahr." ".sprintf("%02d",$Stunden).":".sprintf("%02d",$Minuten); + + my $fieldlist = ""; + + # Verbindung mit DB + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $e = $@; + $err = encode_base64($e,""); + Log3 ($name, 2, "DbRep $name - $e"); + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + # SQL-Startzeit + my $st = [gettimeofday]; + + ##################### Mysql-Version ermitteln ######################## + eval { $sth = $dbh->prepare("SELECT VERSION()"); + $sth->execute; + }; + if ($@) { + $e = $@; + $err = encode_base64($e,""); + Log3 ($name, 2, "DbRep $name - $e"); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + my @mysql_version = $sth->fetchrow; + my @v = split(/\./,$mysql_version[0]); + + if($v[0] >= 5 || ($v[0] >= 4 && $v[1] >= 1) ) { + # mysql Version >= 4.1 + $sth = $dbh->prepare("SET NAMES '".$character_set."'"); + $sth->execute; + # get standard encoding of MySQl-Server + $sth = $dbh->prepare("SHOW VARIABLES LIKE 'character_set_connection'"); + $sth->execute; + @ar = $sth->fetchrow; + $character_set = $ar[1]; + } else { + # mysql Version < 4.1 -> no SET NAMES available + # get standard encoding of MySQl-Server + $sth = $dbh->prepare("SHOW VARIABLES LIKE 'character_set'"); + $sth->execute; + @ar = $sth->fetchrow; + if (defined($ar[1])) { $character_set=$ar[1]; } + } + Log3 ($name, 3, "DbRep $name - Characterset of collection and backup file set to $character_set. "); + + + # Eigenschaften der vorhandenen Tabellen ermitteln (SHOW TABLE STATUS -> Rows sind nicht exakt !!) + undef(@tables); + undef(@tablerecords); + my %db_tables_views; + my $t = 0; + my $r = 0; + my $st_e = "\n"; + my $value = 0; + my $engine = ''; + my $query ="SHOW TABLE STATUS FROM `$dbname`"; + + Log3 ($name, 5, "DbRep $name - current query: $query "); + + if ($dbpraefix ne "") { + $query.=" LIKE '$dbpraefix%'"; + Log3 ($name, 3, "DbRep $name - Searching for tables inside database $dbname with prefix $dbpraefix...."); + } else { + Log3 ($name, 3, "DbRep $name - Searching for tables inside database $dbname...."); + } + + eval { $sth = $dbh->prepare($query); + $sth->execute; + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! MySQL-Error: ".$@); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + while ( $value = $sth->fetchrow_hashref()) { + $value->{skip_data} = 0; #defaut -> backup data of table + + # verbose 5 logging + Log3 ($name, 5, "DbRep $name - ......... Table definition found: ........."); + foreach my $tk (sort(keys(%$value))) { + Log3 ($name, 5, "DbRep $name - $tk: $value->{$tk}") if(defined($value->{$tk}) && $tk ne "Rows"); + } + Log3 ($name, 5, "DbRep $name - ......... 
Table definition END ............");
+
+      # decide if we need to skip the data while dumping (VIEWs and MEMORY)
+      # check for old MySQL3-Syntax Type=xxx
+
+      if (defined $value->{Type}) {
+          # port old index type to index engine, so we can use the index Engine in the rest of the script
+          $value->{Engine} = $value->{Type};
+          $engine = uc($value->{Type});
+
+          if ($engine eq "MEMORY") {
+              $value->{skip_data} = 1;
+          }
+      }
+
+      # check for > MySQL3 Engine = xxx
+      if (defined $value->{Engine}) {
+          $engine = uc($value->{Engine});
+
+          if ($engine eq "MEMORY") {
+              $value->{skip_data} = 1;
+          }
+      }
+
+      # check for Views - if it is a view the comment starts with "VIEW"
+      if (defined $value->{Comment} && uc(substr($value->{Comment},0,4)) eq 'VIEW') {
+          $value->{skip_data}   = 1;
+          $value->{Engine}      = 'VIEW';
+          $value->{Update_time} = '';
+          $db_tables_views{$value->{Name}} = $value;
+      } else {
+          $db_tables{$value->{Name}} = $value;
+      }
+
+      # cast indexes to int, because they are used for building the status line
+      $value->{Rows}         += 0;
+      $value->{Data_length}  += 0;
+      $value->{Index_length} += 0;
+  }
+  $sth->finish;
+
+  @tablenames = sort(keys(%db_tables));
+
+  # add VIEW at the end as they need all tables to be created before
+  @tablenames = (@tablenames,sort(keys(%db_tables_views)));
+  %db_tables  = (%db_tables,%db_tables_views);
+  $tablename  = '';
+
+  if (@tablenames < 1) {
+      $err = "There are no tables inside database $dbname ! It doesn't make sense to backup an empty database. Skipping this one.";
+      Log3 ($name, 2, "DbRep $name - $err");
+      $err = encode_base64($err,"");
+      $dbh->disconnect;
+      return "$name|''|$err|''|''|''|''|''|''|''";
+  }
+
+  if($optimize_tables_beforedump) {
+      # optimize tables before the dump
+      $hash->{HELPER}{DBTABLES} = \%db_tables;
+      ($err,$db_MB_start,$db_MB_end) = DbRep_mysqlOptimizeTables($hash,$dbh,@tablenames);
+      if ($err) {
+          $err = encode_base64($err,"");
+          return "$name|''|$err|''|''|''|''|''|''|''";
+      }
+  }
+
+  # determine table properties for the SQL file
+  $st_e .= "-- TABLE-INFO\n";
+
+  foreach $tablename (@tablenames) {
+      my $dump_table = 1;
+
+      if ($dbpraefix ne "") {
+          if (substr($tablename,0,length($dbpraefix)) ne $dbpraefix) {
+              # exclude table from backup because it doesn't fit to praefix
+              $dump_table = 0;
+          }
+      }
+
+      if ($dump_table == 1) {
+          # how many rows
+          $sql_create = "SELECT count(*) FROM `$tablename`";
+          eval { $sth = $dbh->prepare($sql_create);
+                 $sth->execute;
+               };
+          if ($@) {
+              $e = $@;
+              $err = "Fatal error sending Query '".$sql_create."' ! MySQL-Error: ".$e;
+              Log3 ($name, 2, "DbRep $name - $err");
+              $err = encode_base64($e,"");
+              $dbh->disconnect;
+              return "$name|''|$err|''|''|''|''|''|''|''";
+          }
+          $db_tables{$tablename}{Rows} = $sth->fetchrow;
+          $sth->finish;
+
+          $r += $db_tables{$tablename}{Rows};
+          push(@tables,$db_tables{$tablename}{Name});    # add tablename to backuped tables
+          $t++;
+
+          if (!defined $db_tables{$tablename}{Update_time}) {
+              $db_tables{$tablename}{Update_time} = 0;
+          }
+
+          $st_e .= $mysql_commentstring."TABLE: $db_tables{$tablename}{Name} | Rows: $db_tables{$tablename}{Rows} | Length: ".($db_tables{$tablename}{Data_length}+$db_tables{$tablename}{Index_length})." | Engine: $db_tables{$tablename}{Engine}\n";
+          if($db_tables{$tablename}{Name} eq "current") {
+              $drc = $db_tables{$tablename}{Rows};
+          }
+          if($db_tables{$tablename}{Name} eq "history") {
+              $drh = $db_tables{$tablename}{Rows};
+          }
+      }
+  }
+  $st_e .= "-- EOF TABLE-INFO";
+
+  Log3 ($name, 3, "DbRep $name - Found ".(@tables)." 
tables with $r records."); + + # AUFBAU der Statuszeile in SQL-File: + # -- Status | tabellenzahl | datensaetze | Datenbankname | Kommentar | MySQLVersion | Charset | EXTINFO + # + $status_start = $mysql_commentstring."Status | Tables: $t | Rows: $r "; + $status_end = "| DB: $dbname | Comment: $my_comment | MySQL-Version: $mysql_version[0] "; + $status_end .= "| Charset: $character_set $st_e\n". + $mysql_commentstring."Dump created on $CTIME_String by DbRep-Version $repver\n".$mysql_commentstring; + + $sql_text = $status_start.$status_end; + + # neues SQL-Ausgabefile anlegen + ($sql_text,$first_insert,$sql_file,$backupfile,$err) = DbRep_NewDumpFilename($sql_text,$dump_path,$dbname,$time_stamp,$character_set); + if ($err) { + Log3 ($name, 2, "DbRep $name - $err"); + $err = encode_base64($err,""); + return "$name|''|$err|''|''|''|''|''|''|''"; + } else { + Log3 ($name, 5, "DbRep $name - New dumpfile $sql_file has been created."); + } + + ##################### jede einzelne Tabelle dumpen ######################## + + $totalrecords = 0; + + foreach $tablename (@tables) { + # first get CREATE TABLE Statement + if($dbpraefix eq "" || ($dbpraefix ne "" && substr($tablename,0,length($dbpraefix)) eq $dbpraefix)) { + Log3 ($name, 3, "DbRep $name - Dumping table $tablename (Type ".$db_tables{$tablename}{Engine}."):"); + + $a = "\n\n$mysql_commentstring\n$mysql_commentstring"."Table structure for table `$tablename`\n$mysql_commentstring\n"; + + if ($db_tables{$tablename}{Engine} ne 'VIEW' ) { + $a .= "DROP TABLE IF EXISTS `$tablename`;\n"; + } else { + $a .= "DROP VIEW IF EXISTS `$tablename`;\n"; + } + + $sql_text .= $a; + $sql_create = "SHOW CREATE TABLE `$tablename`"; + + Log3 ($name, 5, "DbRep $name - current query: $sql_create "); + + eval { $sth = $dbh->prepare($sql_create); + $sth->execute; + }; + if ($@) { + $e = $@; + $err = "Fatal error sending Query '".$sql_create."' ! MySQL-Error: ".$e; + Log3 ($name, 2, "DbRep $name - $err"); + $err = encode_base64($e,""); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + @ergebnis = $sth->fetchrow; + $sth->finish; + $a = $ergebnis[1].";\n"; + + if (length($a) < 10) { + $err = "Fatal error! Couldn't read CREATE-Statement of table `$tablename`! This backup might be incomplete! Check your database for errors. MySQL-Error: ".$DBI::errstr; + Log3 ($name, 2, "DbRep $name - $err"); + } else { + $sql_text .= $a; + # verbose 5 logging + Log3 ($name, 5, "DbRep $name - Create-SQL found:\n$a"); + } + + if ($db_tables{$tablename}{skip_data} == 0) { + $sql_text .= "\n$mysql_commentstring\n$mysql_commentstring"."Dumping data for table `$tablename`\n$mysql_commentstring\n"; + $sql_text .= "/*!40000 ALTER TABLE `$tablename` DISABLE KEYS */;"; + + DbRep_WriteToDumpFile($sql_text,$sql_file); + $sql_text = ""; + + # build fieldlist + $fieldlist = "("; + $sql_create = "SHOW FIELDS FROM `$tablename`"; + Log3 ($name, 5, "DbRep $name - current query: $sql_create "); + + eval { $sth = $dbh->prepare($sql_create); + $sth->execute; + }; + if ($@) { + $e = $@; + $err = "Fatal error sending Query '".$sql_create."' ! 
MySQL-Error: ".$e; + Log3 ($name, 2, "DbRep $name - $err"); + $err = encode_base64($e,""); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + while (@ar = $sth->fetchrow) { + $fieldlist .= "`".$ar[0]."`,"; + } + $sth->finish; + + # verbose 5 logging + Log3 ($name, 5, "DbRep $name - Fieldlist found: $fieldlist"); + + # remove trailing ',' and add ')' + $fieldlist = substr($fieldlist,0,length($fieldlist)-1).")"; + + # how many rows + $rct = $db_tables{$tablename}{Rows}; + Log3 ($name, 5, "DbRep $name - Number entries of table $tablename: $rct"); + + # create insert Statements + for (my $ttt = 0; $ttt < $rct; $ttt += $dumpspeed) { + # default beginning for INSERT-String + $insert = "INSERT INTO `$tablename` $fieldlist VALUES ("; + $first_insert = 0; + + # get rows (parts) + $sql_daten = "SELECT * FROM `$tablename` LIMIT ".$ttt.",".$dumpspeed.";"; + + eval { $sth = $dbh->prepare($sql_daten); + $sth->execute; + }; + if ($@) { + $e = $@; + $err = "Fatal error sending Query '".$sql_daten."' ! MySQL-Error: ".$e; + Log3 ($name, 2, "DbRep $name - $err"); + $err = encode_base64($e,""); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + while ( @ar = $sth->fetchrow) { + #Start the insert + if($first_insert == 0) { + $a = "\n$insert"; + } else { + $a = "\n("; + } + + # quote all values + foreach $inhalt(@ar) { $a .= $dbh->quote($inhalt).","; } + + # remove trailing ',' and add end-sql + $a = substr($a,0, length($a)-1).");"; + $sql_text .= $a; + + if($memory_limit > 0 && length($sql_text) > $memory_limit) { + ($filesize,$err) = DbRep_WriteToDumpFile($sql_text,$sql_file); + # Log3 ($name, 5, "DbRep $name - Memory limit '$memory_limit' exceeded. Wrote to '$sql_file'. Filesize: '".DbRep_byteOutput($filesize)."'"); + $sql_text = ""; + } + } + $sth->finish; + } + $sql_text .= "\n/*!40000 ALTER TABLE `$tablename` ENABLE KEYS */;\n"; + } + + # write sql commands to file + ($filesize,$err) = DbRep_WriteToDumpFile($sql_text,$sql_file); + $sql_text = ""; + + if ($db_tables{$tablename}{skip_data} == 0) { + Log3 ($name, 3, "DbRep $name - $rct records inserted (size of backupfile: ".DbRep_byteOutput($filesize).")") if($filesize); + $totalrecords += $rct; + } else { + Log3 ($name, 3, "DbRep $name - Dumping structure of $tablename (Type ".$db_tables{$tablename}{Engine}." 
) (size of backupfile: ".DbRep_byteOutput($filesize).")"); + } + + } + } + + # end + DbRep_WriteToDumpFile("\nSET FOREIGN_KEY_CHECKS=1;\n",$sql_file); + ($filesize,$err) = DbRep_WriteToDumpFile($mysql_commentstring."EOB\n",$sql_file); + + # Datenbankverbindung schliessen + $sth->finish() if (defined $sth); + $dbh->disconnect(); + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Dumpfile komprimieren wenn dumpCompress=1 + my $compress = AttrVal($name,"dumpCompress",0); + if($compress) { + # $err nicht auswerten -> wenn compress fehlerhaft wird unkomprimiertes dumpfile verwendet + ($err,$backupfile) = DbRep_dumpCompress($hash,$backupfile); + + my $fref = stat("$dump_path$backupfile"); + if ($fref =~ /ARRAY/) { + $filesize = (@{stat("$dump_path$backupfile")})[7]; + } else { + $filesize = (stat("$dump_path$backupfile"))[7]; + } + } + + # Dumpfile per FTP senden und versionieren + my ($ftperr,$ftpmsg,@ftpfd) = DbRep_sendftp($hash,$backupfile); + my $ftp = $ftperr?encode_base64($ftperr,""):$ftpmsg?encode_base64($ftpmsg,""):0; + my $ffd = join(", ", @ftpfd); + $ffd = $ffd?encode_base64($ffd,""):0; + + # alte Dumpfiles löschen + my @fd = DbRep_deldumpfiles($hash,$backupfile); + my $bfd = join(", ", @fd ); + $bfd = $bfd?encode_base64($bfd,""):0; + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + my $fsize = ''; + if($filesize) { + $fsize = DbRep_byteOutput($filesize); + $fsize = encode_base64($fsize,""); + } + + Log3 ($name, 3, "DbRep $name - Finished backup of database $dbname, total time used: ".sprintf("%.0f",$brt)." sec."); + +return "$name|$rt|''|$dump_path$backupfile|$drc|$drh|$fsize|$ftp|$bfd|$ffd"; +} + +#################################################################################################### +# nicht blockierende Dump-Routine für MySQL (serverSide) +#################################################################################################### +sub mysql_DoDumpServerSide($) { + my ($name) = @_; + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $dbname = $hash->{DATABASE}; + my $optimize_tables_beforedump = AttrVal($name, "optimizeTablesBeforeDump", 0); + my $dump_path_rem = AttrVal($name, "dumpDirRemote", "./"); + $dump_path_rem = $dump_path_rem."/" unless($dump_path_rem =~ m/\/$/); + my $ebd = AttrVal($name, "executeBeforeProc", undef); + my $ead = AttrVal($name, "executeAfterProc", undef); + my $table = "history"; + my ($dbh,$sth,$err,$db_MB_start,$db_MB_end,$drh); + my (%db_tables,@tablenames); + + # Background-Startzeit + my $bst = [gettimeofday]; + + # Verbindung mit DB + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + # Eigenschaften der vorhandenen Tabellen ermitteln (SHOW TABLE STATUS -> Rows sind nicht exakt !!) + my $value = 0; + my $query ="SHOW TABLE STATUS FROM `$dbname`"; + + Log3 ($name, 5, "DbRep $name - current query: $query "); + + Log3 ($name, 3, "DbRep $name - Searching for tables inside database $dbname...."); + + eval { $sth = $dbh->prepare($query); + $sth->execute; + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! 
MySQL-Error: ".$@); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + while ( $value = $sth->fetchrow_hashref()) { + # verbose 5 logging + Log3 ($name, 5, "DbRep $name - ......... Table definition found: ........."); + foreach my $tk (sort(keys(%$value))) { + Log3 ($name, 5, "DbRep $name - $tk: $value->{$tk}") if(defined($value->{$tk}) && $tk ne "Rows"); + } + Log3 ($name, 5, "DbRep $name - ......... Table definition END ............"); + + # check for old MySQL3-Syntax Type=xxx + if (defined $value->{Type}) { + # port old index type to index engine, so we can use the index Engine in the rest of the script + $value->{Engine} = $value->{Type}; + } + $db_tables{$value->{Name}} = $value; + + } + $sth->finish; + + @tablenames = sort(keys(%db_tables)); + + if (@tablenames < 1) { + $err = "There are no tables inside database $dbname ! It doesn't make sense to backup an empty database. Skipping this one."; + Log3 ($name, 2, "DbRep $name - $err"); + $err = encode_base64($@,""); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + if($optimize_tables_beforedump) { + # Tabellen optimieren vor dem Dump + $hash->{HELPER}{DBTABLES} = \%db_tables; + ($err,$db_MB_start,$db_MB_end) = DbRep_mysqlOptimizeTables($hash,$dbh,@tablenames); + if ($err) { + $err = encode_base64($err,""); + return "$name|''|$err|''|''|''|''|''|''|''"; + } + } + + Log3 ($name, 3, "DbRep $name - Starting dump of database '$dbname', table '$table'"); + + # Startzeit ermitteln + my ($Sekunden, $Minuten, $Stunden, $Monatstag, $Monat, $Jahr, $Wochentag, $Jahrestag, $Sommerzeit) = localtime(time); + $Jahr += 1900; + $Monat += 1; + $Jahrestag += 1; + my $time_stamp = $Jahr."_".sprintf("%02d",$Monat)."_".sprintf("%02d",$Monatstag)."_".sprintf("%02d",$Stunden)."_".sprintf("%02d",$Minuten); + + my $bfile = $dbname."_".$table."_".$time_stamp.".csv"; + Log3 ($name, 5, "DbRep $name - Use Outfile: $dump_path_rem$bfile"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + my $sql = "SELECT * FROM history INTO OUTFILE '$dump_path_rem$bfile' FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n'; "; + + eval {$sth = $dbh->prepare($sql); + $drh = $sth->execute(); + }; + + if ($@) { + # error bei sql-execute + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Dumpfile komprimieren wenn dumpCompress=1 + my $compress = AttrVal($name,"dumpCompress",0); + if($compress) { + # $err nicht auswerten -> wenn compress fehlerhaft wird unkomprimiertes dumpfile verwendet + ($err,$bfile) = DbRep_dumpCompress($hash,$bfile); + } + + # Größe Dumpfile ermitteln ("dumpDirRemote" muß auf "dumpDirLocal" gemountet sein) + my $dump_path_def = $attr{global}{modpath}."/log/"; + my $dump_path_loc = AttrVal($name,"dumpDirLocal", $dump_path_def); + $dump_path_loc = $dump_path_loc."/" unless($dump_path_loc =~ m/\/$/); + + my $filesize; + my $fref = stat($dump_path_loc.$bfile); + if ($fref =~ /ARRAY/) { + $filesize = (@{stat($dump_path_loc.$bfile)})[7]; + } else { + $filesize = (stat($dump_path_loc.$bfile))[7]; + } + + Log3 ($name, 3, "DbRep $name - Number of exported datasets: $drh"); + Log3 ($name, 3, "DbRep $name - Size of backupfile: ".DbRep_byteOutput($filesize)) if($filesize); + + # Dumpfile per FTP senden und versionieren + my ($ftperr,$ftpmsg,@ftpfd) = DbRep_sendftp($hash,$bfile); + my $ftp = 
+  # send the dump file via FTP and apply versioning
+  my ($ftperr,$ftpmsg,@ftpfd) = DbRep_sendftp($hash,$bfile);
+  my $ftp = $ftperr?encode_base64($ftperr,""):$ftpmsg?encode_base64($ftpmsg,""):0;
+  my $ffd = join(", ", @ftpfd);
+  $ffd    = $ffd?encode_base64($ffd,""):0;
+
+  # delete old dump files
+  my @fd  = DbRep_deldumpfiles($hash,$bfile);
+  my $bfd = join(", ", @fd );
+  $bfd    = $bfd?encode_base64($bfd,""):0;
+
+  # determine background runtime
+  my $brt = tv_interval($bst);
+
+  my $fsize = '';
+  if($filesize) {
+      $fsize = DbRep_byteOutput($filesize);
+      $fsize = encode_base64($fsize,"");
+  }
+
+  $rt = $rt.",".$brt;
+
+  Log3 ($name, 3, "DbRep $name - Finished backup of database $dbname - total time used: ".sprintf("%.0f",$brt)." seconds");
+
+return "$name|$rt|''|$dump_path_rem$bfile|n.a.|$drh|$fsize|$ftp|$bfd|$ffd";
+}
+
+####################################################################################################
+#                                 Dump-Routine SQLite
+####################################################################################################
+sub DbRep_sqliteDoDump($) {
+  my ($name)     = @_;
+  my $hash       = $defs{$name};
+  my $dbloghash  = $hash->{dbloghash};
+  my $dbname     = $hash->{DATABASE};
+  my $dbconn     = $dbloghash->{dbconn};
+  my $dbuser     = $dbloghash->{dbuser};
+  my $dblogname  = $dbloghash->{NAME};
+  my $dbpassword = $attr{"sec$dblogname"}{secret};
+  my $dump_path_def = $attr{global}{modpath}."/log/";
+  my $dump_path  = AttrVal($name, "dumpDirLocal", $dump_path_def);
+  $dump_path     = $dump_path."/" unless($dump_path =~ m/\/$/);
+  my $optimize_tables_beforedump = AttrVal($name, "optimizeTablesBeforeDump", 0);
+  my $ebd        = AttrVal($name, "executeBeforeProc", undef);
+  my $ead        = AttrVal($name, "executeAfterProc", undef);
+  my ($dbh,$err,$db_MB,$r,$query,$sth);
+
+  # background start time
+  my $bst = [gettimeofday];
+
+  # connect to DB
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });};
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - $@");
+      return "$name|''|$err|''|''|''|''|''|''|''";
+  }
+
+  if($optimize_tables_beforedump) {
+      # vacuum before dump
+      # determine initial size
+      $db_MB = (split(' ',qx(du -m $dbname)))[0] if ($^O =~ m/linux/i || $^O =~ m/unix/i);
+      Log3 ($name, 3, "DbRep $name - Size of database $dbname before optimize (MB): $db_MB");
+      $query = "VACUUM";
+      Log3 ($name, 5, "DbRep $name - current query: $query ");
+
+      Log3 ($name, 3, "DbRep $name - VACUUM database $dbname....");
+      eval {$sth = $dbh->prepare($query);
+            $r = $sth->execute();
+           };
+      if ($@) {
+          $err = encode_base64($@,"");
+          Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! 
SQLite-Error: ".$@); + $sth->finish; + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + # Endgröße ermitteln + $db_MB = (split(' ',qx(du -m $dbname)))[0] if ($^O =~ m/linux/i || $^O =~ m/unix/i); + Log3 ($name, 3, "DbRep $name - Size of database $dbname after optimize (MB): $db_MB"); + } + + $dbname = (split /[\/]/, $dbname)[-1]; + + Log3 ($name, 3, "DbRep $name - Starting dump of database '$dbname'"); + + # Startzeit ermitteln + my ($Sekunden, $Minuten, $Stunden, $Monatstag, $Monat, $Jahr, $Wochentag, $Jahrestag, $Sommerzeit) = localtime(time); + $Jahr += 1900; + $Monat += 1; + $Jahrestag += 1; + my $time_stamp = $Jahr."_".sprintf("%02d",$Monat)."_".sprintf("%02d",$Monatstag)."_".sprintf("%02d",$Stunden)."_".sprintf("%02d",$Minuten); + + $dbname = (split /\./, $dbname)[0]; + my $bfile = $dbname."_".$time_stamp.".sqlitebkp"; + Log3 ($name, 5, "DbRep $name - Use Outfile: $dump_path$bfile"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + eval { $dbh->sqlite_backup_to_file($dump_path.$bfile); }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$err|''|''|''|''|''|''|''"; + } + + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Dumpfile komprimieren + my $compress = AttrVal($name,"dumpCompress",0); + if($compress) { + # $err nicht auswerten -> wenn compress fehlerhaft wird unkomprimiertes dumpfile verwendet + ($err,$bfile) = DbRep_dumpCompress($hash,$bfile); + } + + # Größe Dumpfile ermitteln + my @a = split(' ',qx(du $dump_path$bfile)) if ($^O =~ m/linux/i || $^O =~ m/unix/i); + + my $filesize = ($a[0])?($a[0]*1024):"n.a."; + my $fsize = DbRep_byteOutput($filesize); + Log3 ($name, 3, "DbRep $name - Size of backupfile: ".$fsize); + + # Dumpfile per FTP senden und versionieren + my ($ftperr,$ftpmsg,@ftpfd) = DbRep_sendftp($hash,$bfile); + my $ftp = $ftperr?encode_base64($ftperr,""):$ftpmsg?encode_base64($ftpmsg,""):0; + my $ffd = join(", ", @ftpfd); + $ffd = $ffd?encode_base64($ffd,""):0; + + # alte Dumpfiles löschen + my @fd = DbRep_deldumpfiles($hash,$bfile); + my $bfd = join(", ", @fd ); + $bfd = $bfd?encode_base64($bfd,""):0; + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $fsize = encode_base64($fsize,""); + + $rt = $rt.",".$brt; + + Log3 ($name, 3, "DbRep $name - Finished backup of database $dbname - total time used: ".sprintf("%.0f",$brt)." 
seconds"); + +return "$name|$rt|''|$dump_path$bfile|n.a.|n.a.|$fsize|$ftp|$bfd|$ffd"; +} + +#################################################################################################### +# Auswertungsroutine der nicht blockierenden DB-Funktion Dump +#################################################################################################### +sub DbRep_DumpDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $bt = $a[1]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[2]?decode_base64($a[2]):undef; + my $bfile = $a[3]; + my $drc = $a[4]; + my $drh = $a[5]; + my $fs = $a[6]?decode_base64($a[6]):undef; + my $ftp = $a[7]?decode_base64($a[7]):undef; + my $bfd = $a[8]?decode_base64($a[8]):undef; + my $ffd = $a[9]?decode_base64($a[9]):undef; + my $name = $hash->{NAME}; + my $erread; + + delete($hash->{HELPER}{RUNNING_BACKUP_CLIENT}); + delete($hash->{HELPER}{RUNNING_BCKPREST_SERVER}); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue($hash, "DumpFileCreated", $bfile); + ReadingsBulkUpdateValue($hash, "DumpFileCreatedSize", $fs); + ReadingsBulkUpdateValue($hash, "DumpFilesDeleted", $bfd); + ReadingsBulkUpdateValue($hash, "DumpRowsCurrent", $drc); + ReadingsBulkUpdateValue($hash, "DumpRowsHistory", $drh); + ReadingsBulkUpdateValue($hash, "FTP_Message", $ftp) if($ftp); + ReadingsBulkUpdateValue($hash, "FTP_DumpFilesDeleted", $ffd) if($ffd); + ReadingsBulkUpdateValue($hash, "background_processing_time", sprintf("%.4f",$brt)); + readingsEndUpdate($hash, 1); + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "dump"); + + my $state = $erread?$erread:"Database backup finished"; + readingsBeginUpdate($hash); + ReadingsBulkUpdateTimeState($hash,undef,undef,$state); + readingsEndUpdate($hash, 1); + + Log3 ($name, 3, "DbRep $name - Database dump finished successfully. 
"); + +return; +} + +#################################################################################################### +# Dump-Routine SQLite +#################################################################################################### +sub DbRep_sqliteRepair($) { + my ($name) = @_; + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $db = $hash->{DATABASE}; + my $dbname = (split /[\/]/, $db)[-1]; + my $dbpath = (split /$dbname/, $db)[0]; + my $dblogname = $dbloghash->{NAME}; + my $sqlfile = $dbpath."dump_all.sql"; + my ($c,$clog,$ret,$err); + + # Background-Startzeit + my $bst = [gettimeofday]; + + $c = "echo \".mode insert\n.output $sqlfile\n.dump\n.exit\" | sqlite3 $db; "; + $clog = $c; + $clog =~ s/\n/ /g; + Log3 ($name, 4, "DbRep $name - Systemcall: $clog"); + $ret = system qq($c); + if($ret) { + $err = "Error in step \"dump corrupt database\" - see logfile"; + $err = encode_base64($err,""); + return "$name|''|$err"; + } + + $c = "mv $db $db.corrupt"; + $clog = $c; + $clog =~ s/\n/ /g; + Log3 ($name, 4, "DbRep $name - Systemcall: $clog"); + $ret = system qq($c); + if($ret) { + $err = "Error in step \"move atabase to corrupt-db\" - see logfile"; + $err = encode_base64($err,""); + return "$name|''|$err"; + } + + $c = "echo \".read $sqlfile\n.exit\" | sqlite3 $db;"; + $clog = $c; + $clog =~ s/\n/ /g; + Log3 ($name, 4, "DbRep $name - Systemcall: $clog"); + $ret = system qq($c); + if($ret) { + $err = "Error in step \"read dump to new database\" - see logfile"; + $err = encode_base64($err,""); + return "$name|''|$err"; + } + + $c = "rm $sqlfile"; + $clog = $c; + $clog =~ s/\n/ /g; + Log3 ($name, 4, "DbRep $name - Systemcall: $clog"); + $ret = system qq($c); + if($ret) { + $err = "Error in step \"delete $sqlfile\" - see logfile"; + $err = encode_base64($err,""); + return "$name|''|$err"; + } + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + +return "$name|$brt|0"; +} + +#################################################################################################### +# Auswertungsroutine der nicht blockierenden DB-Funktion Dump +#################################################################################################### +sub DbRep_RepairDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $brt = $a[1]; + my $err = $a[2]?decode_base64($a[2]):undef; + my $dbloghash = $hash->{dbloghash}; + my $name = $hash->{NAME}; + my $erread; + + delete($hash->{HELPER}{RUNNING_REPAIR}); + + # Datenbankverbindung in DbLog wieder öffenen + my $dbl = $dbloghash->{NAME}; + CommandSet(undef,"$dbl reopen"); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue($hash, "background_processing_time", sprintf("%.4f",$brt)); + readingsEndUpdate($hash, 1); + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "repair"); + + my $state = $erread?$erread:"Repair finished $hash->{DATABASE}"; + readingsBeginUpdate($hash); + ReadingsBulkUpdateTimeState($hash,undef,undef,$state); + readingsEndUpdate($hash, 1); + + Log3 ($name, 3, "DbRep $name - Database repair $hash->{DATABASE} finished. - total time used: ".sprintf("%.0f",$brt)." 
seconds."); + +return; +} + +#################################################################################################### +# Restore SQLite +#################################################################################################### +sub DbRep_sqliteRestore ($) { + my ($string) = @_; + my ($name,$bfile) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $dump_path_def = $attr{global}{modpath}."/log/"; + my $dump_path = AttrVal($name, "dumpDirLocal", $dump_path_def); + $dump_path = $dump_path."/" unless($dump_path =~ m/\/$/); + my $ebd = AttrVal($name, "executeBeforeProc", undef); + my $ead = AttrVal($name, "executeAfterProc", undef); + my ($dbh,$err,$dbname); + + # Background-Startzeit + my $bst = [gettimeofday]; + + # Verbindung mit DB + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$err|''|''"; + } + + eval { $dbname = $dbh->sqlite_db_filename(); }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + + $dbname = (split /[\/]/, $dbname)[-1]; + + # Dumpfile dekomprimieren wenn gzip + if($bfile =~ m/.*.gzip$/) { + ($err,$bfile) = DbRep_dumpUnCompress($hash,$bfile); + if ($err) { + $err = encode_base64($err,""); + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + } + + Log3 ($name, 3, "DbRep $name - Starting restore of database '$dbname'"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + eval { $dbh->sqlite_backup_from_file($dump_path.$bfile); }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + Log3 ($name, 3, "DbRep $name - Restore of $dump_path$bfile into '$dbname' finished - total time used: ".sprintf("%.0f",$brt)." 
seconds."); + +return "$name|$rt|''|$dump_path$bfile|n.a."; +} + +#################################################################################################### +# Restore MySQL (serverSide) +#################################################################################################### +sub mysql_RestoreServerSide($) { + my ($string) = @_; + my ($name, $bfile) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $dbname = $hash->{DATABASE}; + my $dump_path_rem = AttrVal($name, "dumpDirRemote", "./"); + $dump_path_rem = $dump_path_rem."/" unless($dump_path_rem =~ m/\/$/); + my $table = "history"; + my ($dbh,$sth,$err,$drh); + + # Background-Startzeit + my $bst = [gettimeofday]; + + # Verbindung mit DB + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|$err|''|''"; + } + + # Dumpfile dekomprimieren wenn gzip + if($bfile =~ m/.*.gzip$/) { + ($err,$bfile) = DbRep_dumpUnCompress($hash,$bfile); + if ($err) { + $err = encode_base64($err,""); + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + } + + Log3 ($name, 3, "DbRep $name - Starting restore of database '$dbname', table '$table'."); + + # SQL-Startzeit + my $st = [gettimeofday]; + + my $sql = "LOAD DATA CONCURRENT INFILE '$dump_path_rem$bfile' IGNORE INTO TABLE $table FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n'; "; + + eval {$sth = $dbh->prepare($sql); + $drh = $sth->execute(); + }; + + if ($@) { + # error bei sql-execute + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + Log3 ($name, 3, "DbRep $name - Restore of $dump_path_rem$bfile into '$dbname', '$table' finished - total time used: ".sprintf("%.0f",$brt)." seconds."); + +return "$name|$rt|''|$dump_path_rem$bfile|n.a."; +} + +#################################################################################################### +# Restore MySQL (ClientSide) +#################################################################################################### +sub mysql_RestoreClientSide($) { + my ($string) = @_; + my ($name, $bfile) = split("\\|", $string); + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $dbname = $hash->{DATABASE}; + my $i_max = AttrVal($name, "dumpMemlimit", 100000); # max. 
number of block inserts
+  my $dump_path_def = $attr{global}{modpath}."/log/";
+  my $dump_path     = AttrVal($name, "dumpDirLocal", $dump_path_def);
+  $dump_path        = $dump_path."/" if($dump_path !~ /.*\/$/);
+  my ($dbh,$err,$v1,$v2,$e);
+
+  # background start time
+  my $bst = [gettimeofday];
+
+  # connect to DB
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1 });};
+  if ($@) {
+      $e = $@;
+      $err = encode_base64($e,"");
+      Log3 ($name, 1, "DbRep $name - $e");
+      return "$name|''|$err|''|''";
+  }
+
+  # determine maximum possible packet size (in bits) -> convert to max. characters
+  my @row_ary;
+  my $sql = "show variables like 'max_allowed_packet'";
+  eval {@row_ary = $dbh->selectrow_array($sql);};
+  my $max_packets = $row_ary[1];     # bits
+  $i_max = ($max_packets/8)-500;     # characters incl. safety margin
+
+  # decompress dump file if gzip
+  if($bfile =~ m/.*.gzip$/) {
+      ($err,$bfile) = DbRep_dumpUnCompress($hash,$bfile);
+      if ($err) {
+          $err = encode_base64($err,"");
+          $dbh->disconnect;
+          return "$name|''|$err|''|''";
+      }
+  }
+
+  if(!open(FH, "<$dump_path$bfile")) {
+      $err = encode_base64("could not open ".$dump_path.$bfile.": ".$!,"");
+      return "$name|''|$err|''|''";
+  }
+
+  Log3 ($name, 3, "DbRep $name - Restore of database '$dbname' started. Sourcefile: $dump_path$bfile");
+  Log3 ($name, 3, "DbRep $name - Max packet length of insert statement: $i_max");
+
+  # SQL start time
+  my $st = [gettimeofday];
+
+  my $nc         = 0;   # insert counter current
+  my $nh         = 0;   # insert counter history
+  my $n          = 0;   # insert counter
+  my $i          = 0;   # array counter
+  my $tmp        = '';
+  my $line       = '';
+  my $base_query = '';
+  my $query      = '';
+
+  while(<FH>) {
+      $tmp = $_;
+      chomp($tmp);
+      if(!$tmp || substr($tmp,0,2) eq "--") {
+          next;
+      }
+      $line .= $tmp;
+
+      if(substr($line,-1) eq ";") {
+          if($line !~ /^INSERT INTO.*$/) {
+              eval {$dbh->do($line);};
+              if ($@) {
+                  $e = $@;
+                  $err = encode_base64($e,"");
+                  Log3 ($name, 1, "DbRep $name - last query: $line");
+                  Log3 ($name, 1, "DbRep $name - $e");
+                  close(FH);
+                  $dbh->disconnect;
+                  return "$name|''|$err|''|''";
+              }
+              $line = '';
+              next;
+          }
+
+          if(!$base_query) {
+              $line =~ /INSERT INTO (.*) VALUES \((.*)\);/;
+              $v1 = $1;
+              $v2 = $2;
+              $base_query = qq{INSERT INTO $v1 VALUES };
+              $query      = $base_query;
+              $nc++ if($base_query =~ /INSERT INTO `current`.*/);
+              $nh++ if($base_query =~ /INSERT INTO `history`.*/);
+              $query .= "," if($i);
+              $query .= "(".$v2.")";
+              $i++;
+          } else {
+              $line =~ /INSERT INTO (.*) VALUES \((.*)\);/;
+              $v1 = $1;
+              $v2 = $2;
+              my $ln = qq{INSERT INTO $v1 VALUES };
+              if($base_query eq $ln) {
+                  $nc++ if($base_query =~ /INSERT INTO `current`.*/);
+                  $nh++ if($base_query =~ /INSERT INTO `history`.*/);
+                  $query .= "," if($i);
+                  $query .= "(".$v2.")";
+                  $i++;
+              } else {
+                  $query = $query.";";
+                  eval {$dbh->do($query);};
+                  if ($@) {
+                      $e = $@;
+                      $err = encode_base64($e,"");
+                      Log3 ($name, 1, "DbRep $name - last query: $query");
+                      Log3 ($name, 1, "DbRep $name - $e");
+                      close(FH);
+                      $dbh->disconnect;
+                      return "$name|''|$err|''|''";
+                  }
+                  $i = 0;
+                  $line =~ /INSERT INTO (.*) VALUES \((.*)\);/;
+                  $v1 = $1;
+                  $v2 = $2;
+                  $base_query = qq{INSERT INTO $v1 VALUES };
+                  $query      = $base_query;
+                  $query     .= "(".$v2.")";
+                  $nc++ if($base_query =~ /INSERT INTO `current`.*/);
+                  $nh++ if($base_query =~ /INSERT INTO `history`.*/);
+                  $i++;
+              }
+          }
+
+          if(length($query) >= $i_max) {
+              $query = $query.";";
+              eval {$dbh->do($query);};
+              if ($@) {
+                  $e = $@;
+                  $err = encode_base64($e,"");
+                  Log3 ($name, 1, "DbRep $name - last query: $query");
- last query: $query"); + Log3 ($name, 1, "DbRep $name - $e"); + close(FH); + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + $i = 0; + $query = ''; + $base_query = ''; + } + $line = ''; + } + } + + eval { $dbh->do($query) if($i); + }; + if ($@) { + $e = $@; + $err = encode_base64($e,""); + Log3 ($name, 1, "DbRep $name - last query: $query"); + Log3 ($name, 1, "DbRep $name - $e"); + close(FH); + $dbh->disconnect; + return "$name|''|$err|''|''"; + } + $dbh->disconnect; + close(FH); + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + # Background-Laufzeit ermitteln + my $brt = tv_interval($bst); + + $rt = $rt.",".$brt; + + Log3 ($name, 3, "DbRep $name - Restore of '$dbname' finished - inserted history: $nh, inserted curent: $nc, time used: ".sprintf("%.0f",$brt)." seconds."); + +return "$name|$rt|''|$dump_path$bfile|$nh|$nc"; +} + +#################################################################################################### +# Auswertungsroutine Restore +#################################################################################################### +sub DbRep_restoreDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $hash = $defs{$a[0]}; + my $bt = $a[1]; + my ($rt,$brt) = split(",", $bt); + my $err = $a[2]?decode_base64($a[2]):undef; + my $bfile = $a[3]; + my $drh = $a[4]; + my $drc = $a[5]; + my $name = $hash->{NAME}; + my $erread; + + delete($hash->{HELPER}{RUNNING_RESTORE}); + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return; + } + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue($hash, "RestoreRowsHistory", $drh) if($drh); + ReadingsBulkUpdateValue($hash, "RestoreRowsCurrent", $drc) if($drc); + readingsEndUpdate($hash, 1); + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "restore"); + + my $state = $erread?$erread:"Restore of $bfile finished"; + readingsBeginUpdate($hash); + ReadingsBulkUpdateTimeState($hash,$brt,undef,$state); + readingsEndUpdate($hash, 1); + + Log3 ($name, 3, "DbRep $name - Database restore finished successfully. 
"); + +return; +} + +#################################################################################################### +# Übertragung Datensätze in weitere DB +#################################################################################################### +sub DbRep_syncStandby($) { + my ($string) = @_; + my ($name,$device,$reading,$runtime_string_first,$runtime_string_next,$ts,$stbyname) = split("\\§", $string); + my $hash = $defs{$name}; + my $table = "history"; + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + my ($dbh,$dbhstby,$err,$sql,$irows,$irowdone); + # Quell-DB + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + # Standby-DB + my $stbyhash = $defs{$stbyname}; + my $stbyconn = $stbyhash->{dbconn}; + my $stbyuser = $stbyhash->{dbuser}; + my $stbypasswd = $attr{"sec$stbyname"}{secret}; + my $stbyutf8 = defined($stbyhash->{UTF8})?$stbyhash->{UTF8}:0; + + # Background-Startzeit + my $bst = [gettimeofday]; + + # Verbindung zur Quell-DB + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, mysql_enable_utf8 => $utf8 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err"; + } + + # Verbindung zur Standby-DB + eval {$dbhstby = DBI->connect("dbi:$stbyconn", $stbyuser, $stbypasswd, { PrintError => 0, RaiseError => 1, AutoCommit => 1, mysql_enable_utf8 => $stbyutf8 });}; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + return "$name|''|''|$err"; + } + + # ist Zeiteingrenzung und/oder Aggregation gesetzt ? (wenn ja -> "?" in SQL sonst undef) + my ($IsTimeSet,$IsAggrSet) = DbRep_checktimeaggr($hash); + Log3 ($name, 5, "DbRep $name - IsTimeSet: $IsTimeSet, IsAggrSet: $IsAggrSet"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + my ($sth,$old,$new); + eval { $dbh->begin_work() if($dbh->{AutoCommit}); }; # Transaktion wenn gewünscht und autocommit ein + if ($@) { + Log3($name, 2, "DbRep $name -> Error start transaction - $@"); + } + + # Timestampstring to Array + my @ts = split("\\|", $ts); + Log3 ($name, 5, "DbRep $name - Timestamp-Array: \n@ts"); + + # DB-Abfrage zeilenweise für jeden Array-Eintrag + $irows = 0; + $irowdone = 0; + my $selspec = "TIMESTAMP,DEVICE,TYPE,EVENT,READING,VALUE,UNIT"; + my $addon = ''; + foreach my $row (@ts) { + my @a = split("#", $row); + my $runtime_string = $a[0]; + my $runtime_string_first = $a[1]; + my $runtime_string_next = $a[2]; + + if ($IsTimeSet || $IsAggrSet) { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,"'$runtime_string_first'","'$runtime_string_next'",$addon); + } else { + $sql = DbRep_createSelectSql($hash,"history",$selspec,$device,$reading,undef,undef,$addon); + } + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + eval{ $sth = $dbh->prepare($sql); + $sth->execute(); + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->disconnect; + return "$name|''|''|$err"; + } + + no warnings 'uninitialized'; + # DATE _ESC_ TIME _ESC_ DEVICE _ESC_ TYPE _ESC_ EVENT _ESC_ READING _ESC_ VALUE _ESC_ UNIT + my @row_array = map { ($_->[0] =~ s/ /_ESC_/r)."_ESC_".$_->[1]."_ESC_".$_->[2]."_ESC_".$_->[3]."_ESC_".$_->[4]."_ESC_".$_->[5]."_ESC_".$_->[6] } @{$sth->fetchall_arrayref()}; + use warnings; + + (undef,$irowdone,$err) = 
+
+####################################################################################################
+#        reduceLog - thin out historical values, non-blocking > Forum #41089
+#
+#        $ots - reduce logs older than: attribute "timeOlderThan" or "timestamp_begin"
+#        $nts - reduce logs newer than: attribute "timeDiffToNow" or "timestamp_end"
+####################################################################################################
+sub DbRep_reduceLog($) {
+  my ($string) = @_;
+  my ($name,$nts,$ots) = split("\\|", $string);
+  my $hash       = $defs{$name};
+  my $dbloghash  = $hash->{dbloghash};
+  my $dbconn     = $dbloghash->{dbconn};
+  my $dbuser     = $dbloghash->{dbuser};
+  my $dblogname  = $dbloghash->{NAME};
+  my $dbmodel    = $dbloghash->{MODEL};
+  my $dbpassword = $attr{"sec$dblogname"}{secret};
+  my @a          = @{$hash->{HELPER}{REDUCELOG}};
+  my $utf8       = defined($hash->{UTF8})?$hash->{UTF8}:0;
+  delete $hash->{HELPER}{REDUCELOG};
+  my ($ret,$row,$filter,$exclude,$c,$day,$hour,$lastHour,$updDate,$updHour,$average,$processingDay,$lastUpdH,%hourlyKnown,%averageHash,@excludeRegex,@dayRows,@averageUpd,@averageUpdD);
+  my ($startTime,$currentHour,$currentDay,$deletedCount,$updateCount,$sum,$rowCount,$excludeCount) = (time(),99,0,0,0,0,0,0);
+  my ($dbh,$err,$brt);
+
+  Log3 ($name, 5, "DbRep $name -> Start DbRep_reduceLog");
+
+  eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });};
+
+  if ($@) {
+      $err = encode_base64($@,"");
+      Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@");
+      return "$name|''|$err|''";
+  }
+
+  if ($a[-1] =~ /^EXCLUDE=(.+:.+)+/i) {
+      ($filter) = $a[-1] =~ /^EXCLUDE=(.+)/i;
+      @excludeRegex = split(',',$filter);
+  } elsif ($a[-1] =~ /^INCLUDE=.+:.+$/i) {
+      $filter = 1;
+  }
+  if 
(defined($a[2])) { + $average = ($a[2] =~ /average=day/i) ? "AVERAGE=DAY" : ($a[2] =~ /average/i) ? "AVERAGE=HOUR" : 0; + } + + Log3 ($name, 3, "DbRep $name - reduce data older than: $ots, newer than: $nts"); + Log3 ($name, 3, "DbRep $name - reduceLog requested with options: " + .(($average) ? "$average" : '') + .(($average && $filter) ? ", " : '').(($filter) ? uc((split('=',$a[-1]))[0]).'='.(split('=',$a[-1]))[1] : '')); + + if ($ots) { + my ($sth_del, $sth_upd, $sth_delD, $sth_updD, $sth_get); + eval { $sth_del = $dbh->prepare_cached("DELETE FROM history WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?) AND (VALUE=?)"); + $sth_upd = $dbh->prepare_cached("UPDATE history SET TIMESTAMP=?, EVENT=?, VALUE=? WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?) AND (VALUE=?)"); + $sth_delD = $dbh->prepare_cached("DELETE FROM history WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?)"); + $sth_updD = $dbh->prepare_cached("UPDATE history SET TIMESTAMP=?, EVENT=?, VALUE=? WHERE (DEVICE=?) AND (READING=?) AND (TIMESTAMP=?)"); + $sth_get = $dbh->prepare("SELECT TIMESTAMP,DEVICE,'',READING,VALUE FROM history WHERE " + .($a[-1] =~ /^INCLUDE=(.+):(.+)$/i ? "DEVICE like '$1' AND READING like '$2' AND " : '') + ."TIMESTAMP < '$ots'".($nts?" AND TIMESTAMP >= '$nts' ":" ")."ORDER BY TIMESTAMP ASC"); # '' was EVENT, no longer in use + }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + return "$name|''|$err|''"; + } + + eval { $sth_get->execute(); }; + if ($@) { + $err = encode_base64($@,""); + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + return "$name|''|$err|''"; + } + + do { + $row = $sth_get->fetchrow_arrayref || ['0000-00-00 00:00:00','D','','R','V']; # || execute last-day dummy + $ret = 1; + ($day,$hour) = $row->[0] =~ /-(\d{2})\s(\d{2}):/; + $rowCount++ if($day != 00); + if ($day != $currentDay) { + if ($currentDay) { # false on first executed day + if (scalar @dayRows) { + ($lastHour) = $dayRows[-1]->[0] =~ /(.*\d+\s\d{2}):/; + $c = 0; + for my $delRow (@dayRows) { + $c++ if($day != 00 || $delRow->[0] !~ /$lastHour/); + } + if($c) { + $deletedCount += $c; + Log3 ($name, 3, "DbRep $name - reduceLog deleting $c records of day: $processingDay"); + $dbh->{RaiseError} = 1; + $dbh->{PrintError} = 0; + eval {$dbh->begin_work() if($dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + eval { + my $i = 0; + my $k = 1; + my $th = ($#dayRows <= 2000)?100:($#dayRows <= 30000)?1000:10000; + for my $delRow (@dayRows) { + if($day != 00 || $delRow->[0] !~ /$lastHour/) { + Log3 ($name, 4, "DbRep $name - DELETE FROM history WHERE (DEVICE=$delRow->[1]) AND (READING=$delRow->[3]) AND (TIMESTAMP=$delRow->[0]) AND (VALUE=$delRow->[4])"); + $sth_del->execute(($delRow->[1], $delRow->[3], $delRow->[0], $delRow->[4])); + $i++; + if($i == $th) { + my $prog = $k * $i; + Log3 ($name, 3, "DbRep $name - reduceLog deletion progress of day: $processingDay is: $prog"); + $i = 0; + $k++; + } + } + } + }; + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - reduceLog ! FAILED ! 
for day $processingDay: $err"); + eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + $ret = 0; + } else { + eval {$dbh->commit() if(!$dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + } + $dbh->{RaiseError} = 0; + $dbh->{PrintError} = 1; + } + @dayRows = (); + } + + if ($ret && defined($a[3]) && $a[3] =~ /average/i) { + $dbh->{RaiseError} = 1; + $dbh->{PrintError} = 0; + eval {$dbh->begin_work() if($dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + eval { + push(@averageUpd, {%hourlyKnown}) if($day != 00); + + $c = 0; + for my $hourHash (@averageUpd) { # Only count for logging... + for my $hourKey (keys %$hourHash) { + $c++ if ($hourHash->{$hourKey}->[0] && scalar(@{$hourHash->{$hourKey}->[4]}) > 1); + } + } + $updateCount += $c; + Log3 ($name, 3, "DbRep $name - reduceLog (hourly-average) updating $c records of day: $processingDay") if($c); # else only push to @averageUpdD + + my $i = 0; + my $k = 1; + my $th = ($c <= 2000)?100:($c <= 30000)?1000:10000; + for my $hourHash (@averageUpd) { + for my $hourKey (keys %$hourHash) { + if ($hourHash->{$hourKey}->[0]) { # true if reading is a number + ($updDate,$updHour) = $hourHash->{$hourKey}->[0] =~ /(.*\d+)\s(\d{2}):/; + if (scalar(@{$hourHash->{$hourKey}->[4]}) > 1) { # true if reading has multiple records this hour + for (@{$hourHash->{$hourKey}->[4]}) { $sum += $_; } + $average = sprintf('%.3f', $sum/scalar(@{$hourHash->{$hourKey}->[4]}) ); + $sum = 0; + Log3 ($name, 4, "DbRep $name - UPDATE history SET TIMESTAMP=$updDate $updHour:30:00, EVENT='rl_av_h', VALUE=$average WHERE DEVICE=$hourHash->{$hourKey}->[1] AND READING=$hourHash->{$hourKey}->[3] AND TIMESTAMP=$hourHash->{$hourKey}->[0] AND VALUE=$hourHash->{$hourKey}->[4]->[0]"); + $sth_upd->execute(("$updDate $updHour:30:00", 'rl_av_h', $average, $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $hourHash->{$hourKey}->[0], $hourHash->{$hourKey}->[4]->[0])); + + $i++; + if($i == $th) { + my $prog = $k * $i; + Log3 ($name, 3, "DbRep $name - reduceLog (hourly-average) updating progress of day: $processingDay is: $prog"); + $i = 0; + $k++; + } + push(@averageUpdD, ["$updDate $updHour:30:00", 'rl_av_h', $average, $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $updDate]) if (defined($a[3]) && $a[3] =~ /average=day/i); + } else { + push(@averageUpdD, [$hourHash->{$hourKey}->[0], $hourHash->{$hourKey}->[2], $hourHash->{$hourKey}->[4]->[0], $hourHash->{$hourKey}->[1], $hourHash->{$hourKey}->[3], $updDate]) if (defined($a[3]) && $a[3] =~ /average=day/i); + } + } + } + } + }; + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - reduceLog average=hour ! FAILED ! 
for day $processingDay: $err"); + eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + @averageUpdD = (); + } else { + eval {$dbh->commit() if(!$dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + } + $dbh->{RaiseError} = 0; + $dbh->{PrintError} = 1; + @averageUpd = (); + } + + if (defined($a[3]) && $a[3] =~ /average=day/i && scalar(@averageUpdD) && $day != 00) { + $dbh->{RaiseError} = 1; + $dbh->{PrintError} = 0; + eval {$dbh->begin_work() if($dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + eval { + for (@averageUpdD) { + push(@{$averageHash{$_->[3].$_->[4]}->{tedr}}, [$_->[0], $_->[1], $_->[3], $_->[4]]); + $averageHash{$_->[3].$_->[4]}->{sum} += $_->[2]; + $averageHash{$_->[3].$_->[4]}->{date} = $_->[5]; + } + + $c = 0; + for (keys %averageHash) { + if(scalar @{$averageHash{$_}->{tedr}} == 1) { + delete $averageHash{$_}; + } else { + $c += (scalar(@{$averageHash{$_}->{tedr}}) - 1); + } + } + $deletedCount += $c; + $updateCount += keys(%averageHash); + + my ($id,$iu) = 0; + my ($kd,$ku) = 1; + my $thd = ($c <= 2000)?100:($c <= 30000)?1000:10000; + my $thu = ((keys %averageHash) <= 2000)?100:((keys %averageHash) <= 30000)?1000:10000; + Log3 ($name, 3, "DbRep $name - reduceLog (daily-average) updating ".(keys %averageHash).", deleting $c records of day: $processingDay") if(keys %averageHash); + for my $reading (keys %averageHash) { + $average = sprintf('%.3f', $averageHash{$reading}->{sum}/scalar(@{$averageHash{$reading}->{tedr}})); + $lastUpdH = pop @{$averageHash{$reading}->{tedr}}; + for (@{$averageHash{$reading}->{tedr}}) { + Log3 ($name, 5, "DbRep $name - DELETE FROM history WHERE DEVICE='$_->[2]' AND READING='$_->[3]' AND TIMESTAMP='$_->[0]'"); + $sth_delD->execute(($_->[2], $_->[3], $_->[0])); + + $id++; + if($id == $thd) { + my $prog = $kd * $id; + Log3 ($name, 3, "DbRep $name - reduceLog (daily-average) deleting progress of day: $processingDay is: $prog"); + $id = 0; + $kd++; + } + } + Log3 ($name, 4, "DbRep $name - UPDATE history SET TIMESTAMP=$averageHash{$reading}->{date} 12:00:00, EVENT='rl_av_d', VALUE=$average WHERE (DEVICE=$lastUpdH->[2]) AND (READING=$lastUpdH->[3]) AND (TIMESTAMP=$lastUpdH->[0])"); + $sth_updD->execute(($averageHash{$reading}->{date}." 12:00:00", 'rl_av_d', $average, $lastUpdH->[2], $lastUpdH->[3], $lastUpdH->[0])); + + $iu++; + if($iu == $thu) { + my $prog = $ku * $id; + Log3 ($name, 3, "DbRep $name - reduceLog (daily-average) updating progress of day: $processingDay is: $prog"); + $iu = 0; + $ku++; + } + } + }; + if ($@) { + Log3 ($name, 3, "DbRep $name - reduceLog average=day ! FAILED ! 
for day $processingDay"); + eval {$dbh->rollback() if(!$dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + } else { + eval {$dbh->commit() if(!$dbh->{AutoCommit});}; + if ($@) { + Log3 ($name, 2, "DbRep $name - DbRep_reduceLog - $@"); + } + } + $dbh->{RaiseError} = 0; + $dbh->{PrintError} = 1; + } + %averageHash = (); + %hourlyKnown = (); + @averageUpd = (); + @averageUpdD = (); + $currentHour = 99; + } + $currentDay = $day; + } + + if ($hour != $currentHour) { # forget records from last hour, but remember these for average + if (defined($a[3]) && $a[3] =~ /average/i && keys(%hourlyKnown)) { + push(@averageUpd, {%hourlyKnown}); + } + %hourlyKnown = (); + $currentHour = $hour; + } + if (defined $hourlyKnown{$row->[1].$row->[3]}) { # remember first readings for device per h, other can be deleted + push(@dayRows, [@$row]); + if (defined($a[3]) && $a[3] =~ /average/i && defined($row->[4]) && $row->[4] =~ /^-?(?:\d+(?:\.\d*)?|\.\d+)$/ && $hourlyKnown{$row->[1].$row->[3]}->[0]) { + if ($hourlyKnown{$row->[1].$row->[3]}->[0]) { + push(@{$hourlyKnown{$row->[1].$row->[3]}->[4]}, $row->[4]); + } + } + } else { + $exclude = 0; + for (@excludeRegex) { + $exclude = 1 if("$row->[1]:$row->[3]" =~ /^$_$/); + } + if ($exclude) { + $excludeCount++ if($day != 00); + } else { + $hourlyKnown{$row->[1].$row->[3]} = (defined($row->[4]) && $row->[4] =~ /^-?(?:\d+(?:\.\d*)?|\.\d+)$/) ? [$row->[0],$row->[1],$row->[2],$row->[3],[$row->[4]]] : [0]; + } + } + $processingDay = (split(' ',$row->[0]))[0]; + + } while( $day != 00 ); + + $brt = sprintf('%.2f',time() - $startTime); + my $result = "Rows processed: $rowCount, deleted: $deletedCount" + .((defined($a[3]) && $a[3] =~ /average/i)? ", updated: $updateCount" : '') + .(($excludeCount)? ", excluded: $excludeCount" : ''); + Log3 ($name, 3, "DbRep $name - reduceLog finished. $result"); + $ret = $result; + $ret = "reduceLog finished. 
$result"; + } else { + $err = "reduceLog needs at least one of attributes \"timeOlderThan\", \"timeDiffToNow\", \"timestamp_begin\" or \"timestamp_end\" to be set"; + Log3 ($name, 2, "DbRep $name - ERROR - $err"); + $err = encode_base64($err,""); + return "$name|''|$err|''"; + } + + $dbh->disconnect(); + $ret = encode_base64($ret,""); + Log3 ($name, 5, "DbRep $name -> DbRep_reduceLogNbl finished"); + +return "$name|$ret|0|$brt"; +} + +#################################################################################################### +# reduceLog non-blocking Rückkehrfunktion +#################################################################################################### +sub DbRep_reduceLogDone($) { + my ($string) = @_; + my @a = split("\\|",$string); + my $name = $a[0]; + my $hash = $defs{$name}; + my $ret = decode_base64($a[1]); + my $err = decode_base64($a[2]) if ($a[2]); + my $brt = $a[3]; + my $dbloghash = $hash->{dbloghash}; + my $erread; + + delete $hash->{HELPER}{RUNNING_REDUCELOG}; + + if ($err) { + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return; + } + + # only for this block because of warnings if details of readings are not set + no warnings 'uninitialized'; + + readingsBeginUpdate($hash); + ReadingsBulkUpdateValue($hash, "background_processing_time", sprintf("%.4f",$brt)); + ReadingsBulkUpdateValue($hash, "reduceLogState", $ret); + readingsEndUpdate($hash, 1); + + # Befehl nach Procedure ausführen + $erread = DbRep_afterproc($hash, "reduceLog"); + + my $state = $erread?$erread:"reduceLog of $hash->{DATABASE} finished"; + readingsBeginUpdate($hash); + ReadingsBulkUpdateTimeState($hash,undef,undef,$state); + readingsEndUpdate($hash, 1); + + use warnings; + +return; +} + +#################################################################################################### +# Abbruchroutine Timeout reduceLog +#################################################################################################### +sub DbRep_reduceLogAborted(@) { + my ($hash,$cause) = @_; + my $name = $hash->{NAME}; + my $dbh = $hash->{DBH}; + my $erread; + + $cause = $cause?$cause:"Timeout: process terminated"; + Log3 ($name, 1, "DbRep $name - BlockingCall $hash->{HELPER}{RUNNING_REDUCELOG}{fn} pid:$hash->{HELPER}{RUNNING_REDUCELOG}{pid} $cause") if($hash->{HELPER}{RUNNING_REDUCELOG}); + + # Befehl nach Procedure ausführen + no warnings 'uninitialized'; + $erread = DbRep_afterproc($hash, "reduceLog"); + $erread = ", ".(split("but", $erread))[1] if($erread); + + my $state = $cause.$erread; + $dbh->disconnect() if(defined($dbh)); + ReadingsSingleUpdateValue ($hash, "state", $state, 1); + + Log3 ($name, 2, "DbRep $name - Database reduceLog aborted due to \"$cause\" "); + + delete($hash->{HELPER}{RUNNING_REDUCELOG}); + +return; +} + +#################################################################################################### +# Abbruchroutine Timeout Restore +#################################################################################################### +sub DbRep_restoreAborted(@) { + my ($hash,$cause) = @_; + my $name = $hash->{NAME}; + my $dbh = $hash->{DBH}; + my $erread; + + $cause = $cause?$cause:"Timeout: process terminated"; + Log3 ($name, 1, "DbRep $name - BlockingCall $hash->{HELPER}{RUNNING_RESTORE}{fn} pid:$hash->{HELPER}{RUNNING_RESTORE}{pid} $cause") if($hash->{HELPER}{RUNNING_RESTORE}); + + # Befehl nach Procedure ausführen + no warnings 'uninitialized'; + $erread = DbRep_afterproc($hash, 
"restore"); + $erread = ", ".(split("but", $erread))[1] if($erread); + + my $state = $cause.$erread; + $dbh->disconnect() if(defined($dbh)); + ReadingsSingleUpdateValue ($hash, "state", $state, 1); + + Log3 ($name, 2, "DbRep $name - Database restore aborted due to \"$cause\" "); + + delete($hash->{HELPER}{RUNNING_RESTORE}); + +return; +} + +#################################################################################################### +# Abbruchroutine Timeout DB-Abfrage +#################################################################################################### +sub DbRep_ParseAborted(@) { + my ($hash,$cause) = @_; + my $name = $hash->{NAME}; + my $dbh = $hash->{DBH}; + my $erread; + + $cause = $cause?$cause:"Timeout: process terminated"; + Log3 ($name, 1, "DbRep $name -> BlockingCall $hash->{HELPER}{RUNNING_PID}{fn} pid:$hash->{HELPER}{RUNNING_PID}{pid} $cause"); + + # Befehl nach Procedure ausführen + no warnings 'uninitialized'; + $erread = DbRep_afterproc($hash, "command"); + $erread = ", ".(split("but", $erread))[1] if($erread); + + $dbh->disconnect() if(defined($dbh)); + ReadingsSingleUpdateValue ($hash,"state",$cause, 1); + + delete($hash->{HELPER}{RUNNING_PID}); +return; +} + +#################################################################################################### +# Abbruchroutine Timeout DB-Dump +#################################################################################################### +sub DbRep_DumpAborted(@) { + my ($hash,$cause) = @_; + my $name = $hash->{NAME}; + my $dbh = $hash->{DBH}; + my ($erread); + + $cause = $cause?$cause:"Timeout: process terminated"; + Log3 ($name, 1, "DbRep $name - BlockingCall $hash->{HELPER}{RUNNING_BACKUP_CLIENT}{fn} pid:$hash->{HELPER}{RUNNING_BACKUP_CLIENT}{pid} $cause") if($hash->{HELPER}{RUNNING_BACKUP_CLIENT}); + Log3 ($name, 1, "DbRep $name - BlockingCall $hash->{HELPER}{RUNNING_BCKPREST_SERVER}{fn} pid:$hash->{HELPER}{RUNNING_BCKPREST_SERVER}{pid} $cause") if($hash->{HELPER}{RUNNING_BCKPREST_SERVER}); + + # Befehl nach Procedure ausführen + no warnings 'uninitialized'; + $erread = DbRep_afterproc($hash, "dump"); + $erread = ", ".(split("but", $erread))[1] if($erread); + + my $state = $cause.$erread; + $dbh->disconnect() if(defined($dbh)); + ReadingsSingleUpdateValue ($hash, "state", $state, 1); + + Log3 ($name, 2, "DbRep $name - Database dump aborted due to \"$cause\" "); + + delete($hash->{HELPER}{RUNNING_BACKUP_CLIENT}); + delete($hash->{HELPER}{RUNNING_BCKPREST_SERVER}); +return; +} + +#################################################################################################### +# Abbruchroutine Timeout DB-Abfrage +#################################################################################################### +sub DbRep_OptimizeAborted(@) { + my ($hash,$cause) = @_; + my $name = $hash->{NAME}; + my $dbh = $hash->{DBH}; + my ($erread); + + $cause = $cause?$cause:"Timeout: process terminated"; + Log3 ($name, 1, "DbRep $name -> BlockingCall $hash->{HELPER}{RUNNING_OPTIMIZE}}{fn} pid:$hash->{HELPER}{RUNNING_OPTIMIZE}{pid} $cause"); + + # Befehl nach Procedure ausführen + no warnings 'uninitialized'; + $erread = DbRep_afterproc($hash, "optimize"); + $erread = ", ".(split("but", $erread))[1] if($erread); + + my $state = $cause.$erread; + $dbh->disconnect() if(defined($dbh)); + ReadingsSingleUpdateValue ($hash, "state", $state, 1); + + Log3 ($name, 2, "DbRep $name - Database optimize aborted due to \"$cause\" "); + + delete($hash->{HELPER}{RUNNING_OPTIMIZE}); +return; +} + 
+#################################################################################################### +# Abbruchroutine Repair SQlite +#################################################################################################### +sub DbRep_RepairAborted(@) { + my ($hash,$cause) = @_; + my $name = $hash->{NAME}; + my $dbh = $hash->{DBH}; + my $dbloghash = $hash->{dbloghash}; + my $erread; + + $cause = $cause?$cause:"Timeout: process terminated"; + Log3 ($name, 1, "DbRep $name -> BlockingCall $hash->{HELPER}{RUNNING_REPAIR}{fn} pid:$hash->{HELPER}{RUNNING_REPAIR}{pid} $cause"); + + # Datenbankverbindung in DbLog wieder öffenen + my $dbl = $dbloghash->{NAME}; + CommandSet(undef,"$dbl reopen"); + + # Befehl nach Procedure ausführen + no warnings 'uninitialized'; + $erread = DbRep_afterproc($hash, "repair"); + $erread = ", ".(split("but", $erread))[1] if($erread); + + $dbh->disconnect() if(defined($dbh)); + ReadingsSingleUpdateValue ($hash,"state",$cause, 1); + + delete($hash->{HELPER}{RUNNING_REPAIR}); +return; +} + +#################################################################################################### +# SQL-Statement zusammenstellen für DB-Abfrage +#################################################################################################### +sub DbRep_createSelectSql($$$$$$$$) { + my ($hash,$table,$selspec,$device,$reading,$tf,$tn,$addon) = @_; + my $name = $hash->{NAME}; + my $dbmodel = $hash->{dbloghash}{MODEL}; + my ($sql,$devs,$danz,$ranz); + my $tnfull = 0; + + ($devs,$danz,$reading,$ranz) = DbRep_specsForSql($hash,$device,$reading); + + if($tn && $tn =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/) { + $tnfull = 1; + } + + $sql = "SELECT $selspec FROM $table where "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if (($tf && $tn)) { + $sql .= "TIMESTAMP >= $tf AND TIMESTAMP ".($tnfull?"<=":"<")." 
$tn "; + } else { + if ($dbmodel eq "POSTGRESQL") { + $sql .= "true "; + } else { + $sql .= "1 "; + } + } + $sql .= "$addon;"; + +return $sql; +} + +#################################################################################################### +# SQL-Statement zusammenstellen für DB-Updates +#################################################################################################### +sub DbRep_createUpdateSql($$$$$$$$) { + my ($hash,$table,$selspec,$device,$reading,$tf,$tn,$addon) = @_; + my $name = $hash->{NAME}; + my $dbmodel = $hash->{dbloghash}{MODEL}; + my ($sql,$devs,$danz,$ranz); + my $tnfull = 0; + + ($devs,$danz,$reading,$ranz) = DbRep_specsForSql($hash,$device,$reading); + + if($tn =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/) { + $tnfull = 1; + } + + $sql = "UPDATE $table SET $selspec AND "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if (($tf && $tn)) { + $sql .= "TIMESTAMP >= $tf AND TIMESTAMP ".($tnfull?"<=":"<")." $tn "; + } else { + if ($dbmodel eq "POSTGRESQL") { + $sql .= "true "; + } else { + $sql .= "1 "; + } + } + $sql .= "$addon;"; + +return $sql; +} + +#################################################################################################### +# SQL-Statement zusammenstellen für Löschvorgänge +#################################################################################################### +sub DbRep_createDeleteSql($$$$$$$) { + my ($hash,$table,$device,$reading,$tf,$tn,$addon) = @_; + my $name = $hash->{NAME}; + my $dbmodel = $hash->{dbloghash}{MODEL}; + my ($sql,$devs,$danz,$ranz); + my $tnfull = 0; + + if($table eq "current") { + $sql = "delete FROM $table; "; + return $sql; + } + + ($devs,$danz,$reading,$ranz) = DbRep_specsForSql($hash,$device,$reading); + + if($tn =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/) { + $tnfull = 1; + } + + $sql = "delete FROM $table where "; + $sql .= "DEVICE LIKE '$devs' AND " if($danz <= 1 && $devs !~ m(^%$) && $devs =~ m(\%)); + $sql .= "DEVICE = '$devs' AND " if($danz <= 1 && $devs !~ m(\%)); + $sql .= "DEVICE IN ($devs) AND " if($danz > 1); + $sql .= "READING LIKE '$reading' AND " if($ranz <= 1 && $reading !~ m(^%$) && $reading =~ m(\%)); + $sql .= "READING = '$reading' AND " if($ranz <= 1 && $reading !~ m(\%)); + $sql .= "READING IN ($reading) AND " if($ranz > 1); + if ($tf && $tn) { + $sql .= "TIMESTAMP >= '$tf' AND TIMESTAMP ".($tnfull?"<=":"<")." 
'$tn' $addon;"; + } else { + if ($dbmodel eq "POSTGRESQL") { + $sql .= "true;"; + } else { + $sql .= "1;"; + } + } + +return $sql; +} + +#################################################################################################### +# Ableiten von Device, Reading-Spezifikationen +#################################################################################################### +sub DbRep_specsForSql($$$) { + my ($hash,$device,$reading) = @_; + my $name = $hash->{NAME}; + + my @dvspcs = devspec2array($device); + my $devs = join(",",@dvspcs); + $devs =~ s/'/''/g; # escape ' with '' + my $danz = $#dvspcs+1; + if ($danz > 1) { + $devs =~ s/,/','/g; + $devs = "'".$devs."'"; + } + Log3 $name, 5, "DbRep $name - Device specifications use for select: $devs"; + + $reading =~ s/'/''/g; # escape ' with '' + my @reads = split(",",$reading); + my $ranz = $#reads+1; + if ($ranz > 1) { + $reading =~ s/,/','/g; + $reading = "'".$reading."'"; + } + Log3 $name, 5, "DbRep $name - Reading specification use for select: $reading"; + +return ($devs,$danz,$reading,$ranz); +} + +#################################################################################################### +# Check ob Zeitgrenzen bzw. Aggregation gesetzt sind, evtl. übertseuern (je nach Funktion) +# Return "1" wenn Bedingung erfüllt, sonst "0" +#################################################################################################### +sub DbRep_checktimeaggr ($) { + my ($hash) = @_; + my $name = $hash->{NAME}; + my $IsTimeSet = 0; + my $IsAggrSet = 0; + my $aggregation = AttrVal($name,"aggregation","no"); + + if ( AttrVal($name,"timestamp_begin",undef) || AttrVal($name,"timestamp_end",undef) || + AttrVal($name,"timeDiffToNow",undef) || AttrVal($name,"timeOlderThan",undef) || AttrVal($name,"timeYearPeriod",undef) ) { + $IsTimeSet = 1; + } + + if ($aggregation ne "no") { + $IsAggrSet = 1; + } + if($hash->{LASTCMD} =~ /delSeqDoublets/) { + $aggregation = ($aggregation eq "no")?"day":$aggregation; # wenn Aggregation "no", für delSeqDoublets immer "day" setzen + $IsAggrSet = 1; + } + if($hash->{LASTCMD} =~ /averageValue/ && AttrVal($name,"averageCalcForm","avgArithmeticMean") eq "avgDailyMeanGWS") { + $aggregation = "day"; # für Tagesmittelwertberechnung des deutschen Wetterdienstes immer "day" + $IsAggrSet = 1; + } + if($hash->{LASTCMD} =~ /delEntries|fetchrows|deviceRename|readingRename|tableCurrentFillup|reduceLog/) { + $IsAggrSet = 0; + $aggregation = "no"; + } + if($hash->{LASTCMD} =~ /deviceRename|readingRename/) { + $IsTimeSet = 0; + } + if($hash->{LASTCMD} =~ /changeValue/) { + if($hash->{HELPER}{COMPLEX}) { + $IsAggrSet = 1; + $aggregation = "day"; + } else { + $IsAggrSet = 0; + $aggregation = "no"; + } + } + if($hash->{LASTCMD} =~ /syncStandby/ ) { + if($aggregation !~ /day|hour|week/) { + $aggregation = "day"; + $IsAggrSet = 1; + } + } + +return ($IsTimeSet,$IsAggrSet,$aggregation); +} + +#################################################################################################### +# ReadingsSingleUpdate für Reading, Value, Event +#################################################################################################### +sub ReadingsSingleUpdateValue ($$$$) { + my ($hash,$reading,$val,$ev) = @_; + my $name = $hash->{NAME}; + + readingsSingleUpdate($hash, $reading, $val, $ev); + DbRep_userexit($name, $reading, $val); + +return; +} + +#################################################################################################### +# Readingsbulkupdate für Reading, Value +# readingsBeginUpdate und 
+
+####################################################################################################
+#                     Readings bulk update for reading, value
+#    readingsBeginUpdate and readingsEndUpdate must be issued before/after the call
+####################################################################################################
+sub ReadingsBulkUpdateValue ($$$) {
+  my ($hash,$reading,$val) = @_;
+  my $name = $hash->{NAME};
+
+  readingsBulkUpdate($hash, $reading, $val);
+  DbRep_userexit($name, $reading, $val);
+
+return;
+}
+
+####################################################################################################
+#                  Readings bulk update for processing_time, state
+#    readingsBeginUpdate and readingsEndUpdate must be issued before/after the call
+####################################################################################################
+sub ReadingsBulkUpdateTimeState ($$$$) {
+  my ($hash,$brt,$rt,$sval) = @_;
+  my $name = $hash->{NAME};
+
+  if(AttrVal($name, "showproctime", undef)) {
+      readingsBulkUpdate($hash, "background_processing_time", sprintf("%.4f",$brt)) if(defined($brt));
+      DbRep_userexit($name, "background_processing_time", sprintf("%.4f",$brt)) if(defined($brt));
+      readingsBulkUpdate($hash, "sql_processing_time", sprintf("%.4f",$rt)) if(defined($rt));
+      DbRep_userexit($name, "sql_processing_time", sprintf("%.4f",$rt)) if(defined($rt));
+  }
+
+  readingsBulkUpdate($hash, "state", $sval);
+  DbRep_userexit($name, "state", $sval);
+
+return;
+}
+
+####################################################################################################
+#                     Display of running BlockingCall processes
+####################################################################################################
+sub DbRep_getblockinginfo($@) {
+  my ($hash) = @_;
+  my $name = $hash->{NAME};
+
+  my @rows;
+  our %BC_hash;
+  my $len = 99;
+  foreach my $h (values %BC_hash) {
+      next if($h->{terminated} || !$h->{pid});
+      my @allk = keys%{$h};
+      foreach my $k (@allk) {
+          Log3 ($name, 5, "DbRep $name -> $k : ".$h->{$k});
+      }
+      my $fn   = (ref($h->{fn})  ? ref($h->{fn})  : $h->{fn});
+      my $arg  = (ref($h->{arg}) ? ref($h->{arg}) : $h->{arg});
+      my $arg1 = substr($arg,0,$len);
+      $arg1    = $arg1."..." if(length($arg) > $len+1);
+      my $to   = ($h->{timeout} ? $h->{timeout} : "N/A");
+      my $conn = ($h->{telnet}  ? $h->{telnet}  : "N/A");
+      push @rows, "$h->{pid}|ESCAPED|$fn|ESCAPED|$arg1|ESCAPED|$to|ESCAPED|$conn";
+  }
+
+  # prepare readings
+  readingsBeginUpdate($hash);
+
+  if(!@rows) {
+      ReadingsBulkUpdateTimeState($hash,undef,undef,"done - No BlockingCall processes running");
+      readingsEndUpdate($hash, 1);
+      return;
+  }
+
+  my $res = "<html><table border=2 bordercolor='darkgreen' cellspacing=0>";
+  $res .= "<tr><td> PID </td><td> FUNCTION </td><td> ARGUMENTS </td><td> TIMEOUT </td><td> CONNECTEDVIA </td></tr>";
+  foreach my $row (@rows) {
+      $row =~ s/\|ESCAPED\|/<\/td><td> /g;
+      $res .= "<tr><td> ".$row." </td></tr>";
+  }
+  my $tab = $res."</table></html>
"; + + ReadingsBulkUpdateValue ($hash,"BlockingInfo",$tab); + ReadingsBulkUpdateValue ($hash,"Blocking_Count",$#rows+1); + + ReadingsBulkUpdateTimeState($hash,undef,undef,"done"); + readingsEndUpdate($hash, 1); + +return; +} + +#################################################################################################### +# relative Zeitangaben als Sekunden normieren +# +# liefert die Attribute timeOlderThan, timeDiffToNow als Sekunden normiert zurück +#################################################################################################### +sub DbRep_normRelTime($) { + my ($hash) = @_; + my $name = $hash->{NAME}; + my $tdtn = AttrVal($name, "timeDiffToNow", undef); + my $toth = AttrVal($name, "timeOlderThan", undef); + + if($tdtn && $tdtn =~ /^\s*[ydhms]:(([\d]+.[\d]+)|[\d]+)\s*/) { + my ($y,$d,$h,$m,$s); + if($tdtn =~ /.*y:(([\d]+.[\d]+)|[\d]+).*/) { + $y = $tdtn; + $y =~ s/.*y:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($tdtn =~ /.*d:(([\d]+.[\d]+)|[\d]+).*/) { + $d = $tdtn; + $d =~ s/.*d:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($tdtn =~ /.*h:(([\d]+.[\d]+)|[\d]+).*/) { + $h = $tdtn; + $h =~ s/.*h:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($tdtn =~ /.*m:(([\d]+.[\d]+)|[\d]+).*/) { + $m = $tdtn; + $m =~ s/.*m:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($tdtn =~ /.*s:(([\d]+.[\d]+)|[\d]+).*/) { + $s = $tdtn; + $s =~ s/.*s:(([\d]+.[\d]+)|[\d]+).*/$1/e ; + } + + no warnings 'uninitialized'; + Log3($name, 4, "DbRep $name - timeDiffToNow - year: $y, day: $d, hour: $h, min: $m, sec: $s "); + use warnings; + $y = $y?($y*365*86400):0; + $d = $d?($d*86400):0; + $h = $h?($h*3600):0; + $m = $m?($m*60):0; + $s = $s?$s:0; + + $tdtn = $y + $d + $h + $m + $s + 1; # one security second for correct create TimeArray + $tdtn = DbRep_corrRelTime($name,$tdtn,1); + } + + if($toth && $toth =~ /^\s*[ydhms]:(([\d]+.[\d]+)|[\d]+)\s*/) { + my ($y,$d,$h,$m,$s); + if($toth =~ /.*y:(([\d]+.[\d]+)|[\d]+).*/) { + $y = $toth; + $y =~ s/.*y:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($toth =~ /.*d:(([\d]+.[\d]+)|[\d]+).*/) { + $d = $toth; + $d =~ s/.*d:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($toth =~ /.*h:(([\d]+.[\d]+)|[\d]+).*/) { + $h = $toth; + $h =~ s/.*h:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($toth =~ /.*m:(([\d]+.[\d]+)|[\d]+).*/) { + $m = $toth; + $m =~ s/.*m:(([\d]+.[\d]+)|[\d]+).*/$1/e; + } + if($toth =~ /.*s:(([\d]+.[\d]+)|[\d]+).*/) { + $s = $toth; + $s =~ s/.*s:(([\d]+.[\d]+)|[\d]+).*/$1/e ; + } + + no warnings 'uninitialized'; + Log3($name, 4, "DbRep $name - timeOlderThan - year: $y, day: $d, hour: $h, min: $m, sec: $s "); + use warnings; + $y = $y?($y*365*86400):0; + $d = $d?($d*86400):0; + $h = $h?($h*3600):0; + $m = $m?($m*60):0; + $s = $s?$s:0; + + $toth = $y + $d + $h + $m + $s + 1; # one security second for correct create TimeArray + $toth = DbRep_corrRelTime($name,$toth,0); + } +return ($toth,$tdtn); +} + +#################################################################################################### +# Korrektur Schaltjahr und Sommer/Winterzeit bei relativen Zeitangaben +#################################################################################################### +sub DbRep_corrRelTime($$$) { + my ($name,$tim,$tdtn) = @_; + my $hash = $defs{$name}; + + # year als Jahre seit 1900 + # $mon als 0..11 + my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst); + my ($dsec,$dmin,$dhour,$dmday,$dmon,$dyear,$dwday,$dyday,$disdst); + if($tdtn) { + # timeDiffToNow + ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time); # Startzeit Ableitung + 
($dsec,$dmin,$dhour,$dmday,$dmon,$dyear,$dwday,$dyday,$disdst) = localtime(time-$tim);  # analyze target timestamp of timeDiffToNow
+  } else {
+      # timeOlderThan
+      ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time-$tim);     # derive start time
+      my $mints = $hash->{HELPER}{MINTS}?$hash->{HELPER}{MINTS}:"1970-01-01 01:00:00";  # use the timestamp of the first record if it was determined
+      my ($yyyy1, $mm1, $dd1, $hh1, $min1, $sec1) = ($mints =~ /(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)/);
+      my $tsend = timelocal($sec1, $min1, $hh1, $dd1, $mm1-1, $yyyy1-1900);
+      ($dsec,$dmin,$dhour,$dmday,$dmon,$dyear,$dwday,$dyday,$disdst) = localtime($tsend);  # analyze target timestamp of timeOlderThan
+  }
+  $year  += 1900;
+  $dyear += 1900;
+  my $k   = $year - $dyear;
+  my $mg  = ((int($mon)+1)+(($year-$dyear-1)*12)+(11-int($dmon)+1));  # total number of months in the evaluation period
+  my $cly = 0;   # number of leap years between begin and end of the evaluation period
+  my $fly = 0;   # first leap year after start
+  my $lly = 0;   # last leap year after start
+  while ($dyear+$k >= $dyear) {
+      my $ily = DbRep_IsLeapYear($name,$dyear+$k);
+      $cly++ if($ily);
+      $fly = $dyear+$k if($ily && !$fly);
+      $lly = $dyear+$k if($ily);
+      $k--;
+  }
+  # Log3($name, 4, "DbRep $name - countleapyear: $cly firstleapyear: $fly lastleapyear: $lly totalmonth: $mg isdaylight:$isdst destdaylight:$disdst");
+  if( ($fly <= $year && $mon > 1) && ($lly > $dyear || ($lly == $dyear && $dmon < 1)) ) {
+      $tim += $cly*86400;
+      # Log3($name, 4, "DbRep $name - leap year correction 1");
+  } else {
+      $tim += ($cly-1)*86400 if($cly);
+      # Log3($name, 4, "DbRep $name - leap year correction 2");
+  }
+
+  # daylight saving time correction
+  $tim += ($disdst-$isdst)*3600 if($disdst != $isdst);
+
+return $tim;
+}
+
+####################################################################################################
+#   returns whether the given year is a leap year ($ily = 1)
+#
+#   Rules:
+#   - if a year is divisible by 4, it is a leap year, but
+#   - if it is divisible by 100, it is not a leap year, unless
+#   - it is also divisible by 400, then it is a leap year
+#
+####################################################################################################
+sub DbRep_IsLeapYear($$) {
+  my ($name,$year) = @_;
+  my $ily = 0;
+  if ($year % 4 == 0 && $year % 100 != 0 || $year % 400 == 0) {   # apply the leap year rules above
+      $ily = 1;
+  }
+  Log3($name, 4, "DbRep $name - Year $year is leap year") if($ily);
+return $ily;
+}
+
+###############################################################################
+#               Filter character encoding for file export
+###############################################################################
+sub DbRep_charfilter($) {
+  my ($txt) = @_;
+
+  # only wanted characters, filter out control characters
+  $txt =~ tr/ A-Za-z0-9!"#$§%&'()*+,-.\/:;<=>?@[\\]^_`{|}~äöüÄÖÜ߀//cd;
+
+return($txt);
+}
+
+###################################################################################
+#                     Execute command before procedure
+###################################################################################
+sub DbRep_beforeproc ($$) {
+  my ($hash, $txt) = @_;
+  my $name = $hash->{NAME};
+
+  # execute command before procedure
+  my $ebd = AttrVal($name, "executeBeforeProc", undef);
+  if($ebd) {
+      Log3 ($name, 3, "DbRep $name - execute command before $txt: '$ebd' ");
+      my $err = AnalyzeCommandChain(undef, $ebd);
+      if ($err) {
+          Log3 ($name, 2, "DbRep $name - command message before $txt: \"$err\" ");
+          my $erread = "Warning - message from 
+
+###################################################################################
+#                    Befehl vor Procedure ausführen
+###################################################################################
+sub DbRep_beforeproc ($$) {
+  my ($hash, $txt) = @_;
+  my $name = $hash->{NAME};
+
+  # Befehl vor Procedure ausführen
+  my $ebd = AttrVal($name, "executeBeforeProc", undef);
+  if($ebd) {
+      Log3 ($name, 3, "DbRep $name - execute command before $txt: '$ebd' ");
+      my $err = AnalyzeCommandChain(undef, $ebd);
+      if ($err) {
+          Log3 ($name, 2, "DbRep $name - command message before $txt: \"$err\" ");
+          my $erread = "Warning - message from command before $txt appeared";
+          ReadingsSingleUpdateValue ($hash, "before".$txt."_message", $err, 1);
+          ReadingsSingleUpdateValue ($hash, "state", $erread, 1);
+      }
+  }
+
+return;
+}
+
+###################################################################################
+#                    Befehl nach Procedure ausführen
+###################################################################################
+sub DbRep_afterproc ($$) {
+  my ($hash, $txt) = @_;
+  my $name = $hash->{NAME};
+  my $erread;
+
+  # Befehl nach Procedure ausführen
+  no warnings 'uninitialized';
+  my $ead = AttrVal($name, "executeAfterProc", undef);
+  if($ead) {
+      Log3 ($name, 4, "DbRep $name - execute command after $txt: '$ead' ");
+      my $err = AnalyzeCommandChain(undef, $ead);
+      if ($err) {
+          Log3 ($name, 2, "DbRep $name - command message after $txt: \"$err\" ");
+          ReadingsSingleUpdateValue ($hash, "after".$txt."_message", $err, 1);
+          $erread = "Warning - $txt finished, but command message after $txt appeared";
+      }
+  }
+
+return $erread;
+}
+
+##############################################################################################
+#          timestamp_begin, timestamp_end bei Einsatz datetime-Picker entsprechend
+#          den Anforderungen formatieren
+##############################################################################################
+sub DbRep_formatpicker ($) {
+  my ($str) = @_;
+  if ($str =~ /^(\d{4})-(\d{2})-(\d{2})_(\d{2}):(\d{2})$/) {
+      # Anpassung für datetime-Picker Widget
+      $str =~ s/_/ /;
+      $str = $str.":00";
+  }
+  if ($str =~ /^(\d{4})-(\d{2})-(\d{2})_(\d{2}):(\d{2}):(\d{2})$/) {
+      # Anpassung für datetime-Picker Widget
+      $str =~ s/_/ /;
+  }
+return $str;
+}
+
+####################################################################################################
+#    userexit - Funktion um userspezifische Programmaufrufe nach Aktualisierung eines Readings
+#    zu ermöglichen, arbeitet OHNE Event abhängig vom Attr userExitFn
+#
+#    Aufruf der userExitFn mit $name,$reading,$value
+####################################################################################################
+sub DbRep_userexit ($$$) {
+ my ($name,$reading,$value) = @_;
+ my $hash = $defs{$name};
+
+ return if(!$hash->{HELPER}{USEREXITFN});
+
+ if(!defined($reading)) {$reading = "";}
+ if(!defined($value))   {$value = "";}
+ $value =~ s/\\/\\\\/g;  # escape chars for evaluation
+ $value =~ s/'/\\'/g;
+
+ my $re = $hash->{HELPER}{UEFN_REGEXP}?$hash->{HELPER}{UEFN_REGEXP}:".*:.*";
+
+ if("$reading:$value" =~ m/^$re$/ ) {
+     my @res;
+     my $cmd = $hash->{HELPER}{USEREXITFN}."('$name','$reading','$value')";
+     $cmd  = "{".$cmd."}";
+     my $r = AnalyzeCommandChain(undef, $cmd);
+ }
+return;
+}
+
+####################################################################################################
+#                 delete Readings before new operation
+####################################################################################################
+sub DbRep_delread($;$$) {
+ # Readings löschen die nicht in der Ausnahmeliste (Attr readingPreventFromDel) stehen
+ my ($hash,$shutdown) = @_;
+ my $name   = $hash->{NAME};
+ my @allrds = keys%{$defs{$name}{READINGS}};
+ if($shutdown) {
+     my $do = 0;
+     foreach my $key(@allrds) {
+         # Highlighted Readings löschen und save statefile wegen Inkompatibilität beim Restart
+         if($key =~ /<html><span/) {
+             $do = 1;
+             readingsDelete($hash,$key);
+         }
+     }
+     WriteStatefile() if($do == 1);
+     return undef;
+ }
+ my @rdpfdel = split(",", $hash->{HELPER}{RDPFDEL}) if($hash->{HELPER}{RDPFDEL});
+ if(@rdpfdel) {
+     foreach my $key(@allrds) {
+         # Log3 ($name, 1, "DbRep $name - Reading Schlüssel: $key");
+         my $dodel = 1;
+         foreach my $rdpfdel(@rdpfdel) {
+             if($key =~ /$rdpfdel/ || $key eq "state") {
+                 $dodel = 0;
+             }
+         }
+ 
if($dodel) { + delete($defs{$name}{READINGS}{$key}); + } + } + } else { + foreach my $key(@allrds) { + # Log3 ($name, 1, "DbRep $name - Reading Schlüssel: $key"); + delete($defs{$name}{READINGS}{$key}) if($key ne "state"); + } + } +return undef; +} + +#################################################################################################### +# erstellen neues SQL-File für Dumproutine +#################################################################################################### +sub DbRep_NewDumpFilename ($$$$$){ + my ($sql_text,$dump_path,$dbname,$time_stamp,$character_set) = @_; + my $part = ""; + my $sql_file = $dump_path.$dbname."_".$time_stamp.$part.".sql"; + my $backupfile = $dbname."_".$time_stamp.$part.".sql"; + + $sql_text .= "/*!40101 SET NAMES '".$character_set."' */;\n"; + $sql_text .= "SET FOREIGN_KEY_CHECKS=0;\n"; + + my ($filesize,$err) = DbRep_WriteToDumpFile($sql_text,$sql_file); + if($err) { + return (undef,undef,undef,undef,$err); + } + chmod(0777,$sql_file); + $sql_text = ""; + my $first_insert = 0; + +return ($sql_text,$first_insert,$sql_file,$backupfile,undef); +} + +#################################################################################################### +# Schreiben DB-Dumps in SQL-File +#################################################################################################### +sub DbRep_WriteToDumpFile ($$) { + my ($inh,$sql_file) = @_; + my $filesize; + my $err = 0; + + if(length($inh) > 0) { + unless(open(DATEI,">>$sql_file")) { + $err = "Can't open file '$sql_file' for write access"; + return (undef,$err); + } + print DATEI $inh; + close(DATEI); + + my $fref = stat($sql_file); + if ($fref =~ /ARRAY/) { + $filesize = (@{stat($sql_file)})[7]; + } else { + $filesize = (stat($sql_file))[7]; + } + } + +return ($filesize,undef); +} + +#################################################################################################### +# Filesize (Byte) umwandeln in KB bzw. 
MB +#################################################################################################### +sub DbRep_byteOutput ($) { + my $bytes = shift; + + return if(!defined($bytes)); + return $bytes if(!looks_like_number($bytes)); + my $suffix = "Bytes"; + if ($bytes >= 1024) { $suffix = "KB"; $bytes = sprintf("%.2f",($bytes/1024));}; + if ($bytes >= 1024) { $suffix = "MB"; $bytes = sprintf("%.2f",($bytes/1024));}; + my $ret = sprintf "%.2f",$bytes; + $ret.=' '.$suffix; + +return $ret; +} + +#################################################################################################### +# Schreibroutine in DbRep Keyvalue-File +#################################################################################################### +sub DbRep_setCmdFile($$$) { + my ($key,$value,$hash) = @_; + my $fName = $attr{global}{modpath}."/FHEM/FhemUtils/cacheDbRep"; + + my $param = { + FileName => $fName, + ForceType => "file", + }; + my ($err, @old) = FileRead($param); + + DbRep_createCmdFile($hash) if($err); + + my @new; + my $fnd; + foreach my $l (@old) { + if($l =~ m/^$key:/) { + $fnd = 1; + push @new, "$key:$value" if(defined($value)); + } else { + push @new, $l; + } + } + push @new, "$key:$value" if(!$fnd && defined($value)); + +return FileWrite($param, @new); +} + +#################################################################################################### +# anlegen Keyvalue-File für DbRep wenn nicht vorhanden +#################################################################################################### +sub DbRep_createCmdFile ($) { + my ($hash) = @_; + my $fName = $attr{global}{modpath}."/FHEM/FhemUtils/cacheDbRep"; + + my $param = { + FileName => $fName, + ForceType => "file", + }; + my @new; + push(@new, "# This file is auto generated from 93_DbRep.", + "# Please do not modify, move or delete it.", + ""); + +return FileWrite($param, @new); +} + +#################################################################################################### +# Leseroutine aus DbRep Keyvalue-File +#################################################################################################### +sub DbRep_getCmdFile($) { + my ($key) = @_; + my $fName = $attr{global}{modpath}."/FHEM/FhemUtils/cacheDbRep"; + my $param = { + FileName => $fName, + ForceType => "file", + }; + my ($err, @l) = FileRead($param); + return ($err, undef) if($err); + for my $l (@l) { + return (undef, $1) if($l =~ m/^$key:(.*)/); + } + +return (undef, undef); +} + +#################################################################################################### +# Tabellenoptimierung MySQL +#################################################################################################### +sub DbRep_mysqlOptimizeTables ($$@) { + my ($hash,$dbh,@tablenames) = @_; + my $name = $hash->{NAME}; + my $dbname = $hash->{DATABASE}; + my $ret = 0; + my $opttbl = 0; + my $db_tables = $hash->{HELPER}{DBTABLES}; + my ($engine,$tablename,$query,$sth,$value,$db_MB_start,$db_MB_end); + + # Anfangsgröße ermitteln + $query = "SELECT sum( data_length + index_length ) / 1024 / 1024 FROM information_schema.TABLES where table_schema='$dbname' "; + Log3 ($name, 5, "DbRep $name - current query: $query "); + eval { $sth = $dbh->prepare($query); + $sth->execute; + }; + if ($@) { + Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! 
MySQL-Error: ".$@); + $sth->finish; + $dbh->disconnect; + return ($@,undef,undef); + } + $value = $sth->fetchrow(); + + $db_MB_start = sprintf("%.2f",$value); + Log3 ($name, 3, "DbRep $name - Size of database $dbname before optimize (MB): $db_MB_start"); + + Log3($name, 3, "DbRep $name - Optimizing tables"); + + foreach $tablename (@tablenames) { + #optimize table if engine supports optimization + $engine = ''; + $engine = uc($db_tables->{$tablename}{Engine}) if($db_tables->{$tablename}{Engine}); + + if ($engine =~ /(MYISAM|BDB|INNODB|ARIA)/) { + Log3($name, 3, "DbRep $name - Optimizing table `$tablename` ($engine). It will take a while."); + my $sth_to = $dbh->prepare("OPTIMIZE TABLE `$tablename`"); + $ret = $sth_to->execute; + + if ($ret) { + Log3($name, 3, "DbRep $name - Table ".($opttbl+1)." `$tablename` optimized successfully."); + $opttbl++; + } else { + Log3($name, 2, "DbRep $name - Error while optimizing table $tablename. Continue with next table or backup."); + } + } + } + + Log3($name, 3, "DbRep $name - $opttbl tables have been optimized.") if($opttbl > 0); + + # Endgröße ermitteln + eval { $sth->execute; }; + if ($@) { + Log3 ($name, 2, "DbRep $name - Error executing: '".$query."' ! MySQL-Error: ".$@); + $sth->finish; + $dbh->disconnect; + return ($@,undef,undef); + } + + $value = $sth->fetchrow(); + $db_MB_end = sprintf("%.2f",$value); + Log3 ($name, 3, "DbRep $name - Size of database $dbname after optimize (MB): $db_MB_end"); + + $sth->finish; + +return (undef,$db_MB_start,$db_MB_end); +} + +#################################################################################################### +# Dump-Files im dumpDirLocal löschen bis auf die letzten "n" +#################################################################################################### +sub DbRep_deldumpfiles ($$) { + my ($hash,$bfile) = @_; + my $name = $hash->{NAME}; + my $dbloghash = $hash->{dbloghash}; + my $dump_path_def = $attr{global}{modpath}."/log/"; + my $dump_path_loc = AttrVal($name,"dumpDirLocal", $dump_path_def); + $dump_path_loc = $dump_path_loc."/" unless($dump_path_loc =~ m/\/$/); + my $dfk = AttrVal($name,"dumpFilesKeep", 3); + my $pfix = (split '\.', $bfile)[1]; + my $dbname = (split '_', $bfile)[0]; + my $file = $dbname."_.*".$pfix.".*"; # Files mit/ohne Endung "gzip" berücksichtigen + my @fd; + + if(!opendir(DH, $dump_path_loc)) { + push(@fd, "No files deleted - Can't open path '$dump_path_loc'"); + return @fd; + } + my @files = sort grep {/^$file$/} readdir(DH); + + my $fref = stat("$dump_path_loc/$bfile"); + + if ($fref =~ /ARRAY/) { + @files = sort { (@{stat("$dump_path_loc/$a")})[9] cmp (@{stat("$dump_path_loc/$b")})[9] } @files + if(AttrVal("global", "archivesort", "alphanum") eq "timestamp"); + } else { + @files = sort { (stat("$dump_path_loc/$a"))[9] cmp (stat("$dump_path_loc/$b"))[9] } @files + if(AttrVal("global", "archivesort", "alphanum") eq "timestamp"); + } + + closedir(DH); + + Log3($name, 5, "DbRep $name - Dump files have been found in dumpDirLocal '$dump_path_loc': ".join(', ',@files) ); + + my $max = int(@files)-$dfk; + + for(my $i = 0; $i < $max; $i++) { + push(@fd, $files[$i]); + Log3($name, 3, "DbRep $name - Deleting old dumpfile '$files[$i]' "); + unlink("$dump_path_loc/$files[$i]"); + } + +return @fd; +} + +#################################################################################################### +# Dumpfile komprimieren +#################################################################################################### +sub DbRep_dumpCompress ($$) { + my 
($hash,$bfile) = @_; + my $name = $hash->{NAME}; + my $dump_path_def = $attr{global}{modpath}."/log/"; + my $dump_path_loc = AttrVal($name,"dumpDirLocal", $dump_path_def); + $dump_path_loc =~ s/(\/$|\\$)//; + my $input = $dump_path_loc."/".$bfile; + my $output = $dump_path_loc."/".$bfile.".gzip"; + + Log3($name, 3, "DbRep $name - compress file $input"); + + my $stat = gzip $input => $output ,BinModeIn => 1; + if($GzipError) { + Log3($name, 2, "DbRep $name - gzip of $input failed: $GzipError"); + return ($GzipError,$input); + } + + Log3($name, 3, "DbRep $name - file compressed to output file: $output"); + unlink("$input"); + Log3($name, 3, "DbRep $name - input file deleted: $input"); + +return (undef,$bfile.".gzip"); +} + +#################################################################################################### +# Dumpfile dekomprimieren +#################################################################################################### +sub DbRep_dumpUnCompress ($$) { + my ($hash,$bfile) = @_; + my $name = $hash->{NAME}; + my $dump_path_def = $attr{global}{modpath}."/log/"; + my $dump_path_loc = AttrVal($name,"dumpDirLocal", $dump_path_def); + $dump_path_loc =~ s/(\/$|\\$)//; + my $input = $dump_path_loc."/".$bfile; + my $outfile = $bfile; + $outfile =~ s/\.gzip//; + my $output = $dump_path_loc."/".$outfile; + + Log3($name, 3, "DbRep $name - uncompress file $input"); + + my $stat = gunzip $input => $output ,BinModeOut => 1; + if($GunzipError) { + Log3($name, 2, "DbRep $name - gunzip of $input failed: $GunzipError"); + return ($GunzipError,$input); + } + + Log3($name, 3, "DbRep $name - file uncompressed to output file: $output"); + + # Größe dekomprimiertes File ermitteln + my @a = split(' ',qx(du $output)) if ($^O =~ m/linux/i || $^O =~ m/unix/i); + + my $filesize = ($a[0])?($a[0]*1024):undef; + my $fsize = DbRep_byteOutput($filesize); + Log3 ($name, 3, "DbRep $name - Size of uncompressed file: ".$fsize); + +return (undef,$outfile); +} + +#################################################################################################### +# erzeugtes Dump-File aus dumpDirLocal zum FTP-Server übertragen +#################################################################################################### +sub DbRep_sendftp ($$) { + my ($hash,$bfile) = @_; + my $name = $hash->{NAME}; + my $dump_path_def = $attr{global}{modpath}."/log/"; + my $dump_path_loc = AttrVal($name,"dumpDirLocal", $dump_path_def); + my $file = (split /[\/]/, $bfile)[-1]; + my $ftpto = AttrVal($name,"ftpTimeout",30); + my $ftpUse = AttrVal($name,"ftpUse",0); + my $ftpuseSSL = AttrVal($name,"ftpUseSSL",0); + my $ftpDir = AttrVal($name,"ftpDir","/"); + my $ftpPort = AttrVal($name,"ftpPort",21); + my $ftpServer = AttrVal($name,"ftpServer",undef); + my $ftpUser = AttrVal($name,"ftpUser","anonymous"); + my $ftpPwd = AttrVal($name,"ftpPwd",undef); + my $ftpPassive = AttrVal($name,"ftpPassive",0); + my $ftpDebug = AttrVal($name,"ftpDebug",0); + my $fdfk = AttrVal($name,"ftpDumpFilesKeep", 3); + my $pfix = (split '\.', $bfile)[1]; + my $dbname = (split '_', $bfile)[0]; + my $ftpl = $dbname."_.*".$pfix.".*"; # Files mit/ohne Endung "gzip" berücksichtigen + my ($ftperr,$ftpmsg,$ftp); + + # kein FTP verwenden oder möglich + return ($ftperr,$ftpmsg) if((!$ftpUse && !$ftpuseSSL) || !$bfile); + + if(!$ftpServer) { + $ftperr = "FTP-Error: FTP-Server isn't set."; + Log3($name, 2, "DbRep $name - $ftperr"); + return ($ftperr,undef); + } + + if(!opendir(DH, $dump_path_loc)) { + $ftperr = "FTP-Error: Can't open path 
'$dump_path_loc'"; + Log3($name, 2, "DbRep $name - $ftperr"); + return ($ftperr,undef); + } + + my $mod_ftpssl = 0; + my $mod_ftp = 0; + my $mod; + + if ($ftpuseSSL) { + # FTP mit SSL soll genutzt werden + $mod = "Net::FTPSSL => e.g. with 'sudo cpan -i Net::FTPSSL' "; + eval { require Net::FTPSSL; }; + if(!$@){ + $mod_ftpssl = 1; + import Net::FTPSSL; + } + } else { + # nur FTP + $mod = "Net::FTP"; + eval { require Net::FTP; }; + if(!$@){ + $mod_ftp = 1; + import Net::FTP; + } + } + + if ($ftpuseSSL && $mod_ftpssl) { + # use ftp-ssl + my $enc = "E"; + eval { $ftp = Net::FTPSSL->new($ftpServer, Port => $ftpPort, Timeout => $ftpto, Debug => $ftpDebug, Encryption => $enc) } + or $ftperr = "FTP-SSL-ERROR: Can't connect - $@"; + } elsif (!$ftpuseSSL && $mod_ftp) { + # use plain ftp + eval { $ftp = Net::FTP->new($ftpServer, Port => $ftpPort, Timeout => $ftpto, Debug => $ftpDebug, Passive => $ftpPassive) } + or $ftperr = "FTP-Error: Can't connect - $@"; + } else { + $ftperr = "FTP-Error: required module couldn't be loaded. You have to install it first: $mod."; + } + if ($ftperr) { + Log3($name, 2, "DbRep $name - $ftperr"); + return ($ftperr,undef); + } + + my $pwdstr = $ftpPwd?$ftpPwd:" "; + $ftp->login($ftpUser, $ftpPwd) or $ftperr = "FTP-Error: Couldn't login with user '$ftpUser' and password '$pwdstr' "; + if ($ftperr) { + Log3($name, 2, "DbRep $name - $ftperr"); + return ($ftperr,undef); + } + + $ftp->binary(); + + # FTP Verzeichnis setzen + $ftp->cwd($ftpDir) or $ftperr = "FTP-Error: Couldn't change directory to '$ftpDir' "; + if ($ftperr) { + Log3($name, 2, "DbRep $name - $ftperr"); + return ($ftperr,undef); + } + + $dump_path_loc =~ s/(\/$|\\$)//; + Log3($name, 3, "DbRep $name - FTP: transferring ".$dump_path_loc."/".$file); + + $ftpmsg = $ftp->put($dump_path_loc."/".$file); + if (!$ftpmsg) { + $ftperr = "FTP-Error: Couldn't transfer ".$file." to ".$ftpServer." into dir ".$ftpDir; + Log3($name, 2, "DbRep $name - $ftperr"); + } else { + $ftpmsg = "FTP: ".$file." transferred successfully to ".$ftpServer." 
into dir ".$ftpDir; + Log3($name, 3, "DbRep $name - $ftpmsg"); + } + + # Versionsverwaltung FTP-Verzeichnis + my (@ftl,@ftpfd); + if($ftpuseSSL) { + @ftl = sort grep {/^$ftpl$/} $ftp->nlst(); + } else { + @ftl = sort grep {/^$ftpl$/} @{$ftp->ls()}; + } + Log3($name, 5, "DbRep $name - FTP: filelist of \"$ftpDir\": @ftl"); + my $max = int(@ftl)-$fdfk; + for(my $i = 0; $i < $max; $i++) { + push(@ftpfd, $ftl[$i]); + Log3($name, 3, "DbRep $name - FTP: deleting old dumpfile '$ftl[$i]' "); + $ftp->delete($ftl[$i]); + } + +return ($ftperr,$ftpmsg,@ftpfd); +} + +#################################################################################################### +# Test auf Daylight saving time +#################################################################################################### +sub DbRep_dsttest ($$$) { + my ($hash,$runtime,$aggsec) = @_; + my $name = $hash->{NAME}; + my $dstchange = 0; + + # der Wechsel der daylight saving time wird dadurch getestet, dass geprüft wird + # ob im Vergleich der aktuellen zur nächsten Selektionsperiode von "$aggsec (day, week, month)" + # ein Wechsel der daylight saving time vorliegt + + my $dst = (localtime($runtime))[8]; # ermitteln daylight saving aktuelle runtime + my $time_str = localtime($runtime+$aggsec); # textual time representation + my $dst_new = (localtime($runtime+$aggsec))[8]; # ermitteln daylight saving nächste runtime + + if ($dst != $dst_new) { + $dstchange = 1; + } + + Log3 ($name, 5, "DbRep $name - Daylight savings changed: $dstchange (on $time_str)"); + +return $dstchange; +} + +#################################################################################################### +# Counthash Untersuchung +# Logausgabe der Anzahl verarbeiteter Datensätze pro Zeitraum / Aggregation +# Rückgabe eines ncp-hash (no calc in period) mit den Perioden für die keine Differenz berechnet +# werden konnte weil nur ein Datensatz in der Periode zur Verfügung stand +#################################################################################################### +sub DbRep_calcount ($$) { + my ($hash,$ch) = @_; + my $name = $hash->{NAME}; + my %ncp = (); + + Log3 ($name, 4, "DbRep $name - count of values used for calc:"); + foreach my $key (sort(keys%{$ch})) { + Log3 ($name, 4, "$key => ". 
$ch->{$key}); + + if($ch->{$key} eq "1") { + $ncp{"$key"} = " ||"; + } + } +return \%ncp; +} + +#################################################################################################### +# Funktionsergebnisse in Datenbank schreiben +#################################################################################################### +sub DbRep_OutputWriteToDB($$$$$) { + my ($name,$device,$reading,$arrstr,$optxt) = @_; + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbmodel = $hash->{dbloghash}{MODEL}; + my $DbLogType = AttrVal($hash->{dbloghash}{NAME}, "DbLogType", "History"); + my $supk = AttrVal($hash->{dbloghash}{NAME}, "noSupportPK", 0); + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + $device =~ s/[^A-Za-z\/\d_\.-]/\//g; + $reading =~ s/[^A-Za-z\/\d_\.-]/\//g; + my $type = "calculated"; + my $event = "calculated"; + my $unit = ""; + my $wrt = 0; + my $irowdone = 0; + my ($dbh,$sth_ih,$sth_uh,$sth_ic,$sth_uc,$err,$timestamp,$value,$date,$time,$rsf,$aggr,@row_array); + + if(!$hash->{dbloghash}{HELPER}{COLSET}) { + $err = "No result of \"$hash->{LASTCMD}\" to database written. Cause: column width in \"$hash->{DEF}\" isn't set"; + return ($wrt,$irowdone,$err); + } + + no warnings 'uninitialized'; + (undef,undef,$aggr) = DbRep_checktimeaggr($hash); + $reading = $optxt."_".$aggr."_".AttrVal($name, "readingNameMap", $reading); + + $type = $defs{$device}{TYPE} if($defs{$device}); # $type vom Device ableiten + + if($optxt =~ /avg|sum/) { + my @arr = split("\\|", $arrstr); + foreach my $row (@arr) { + my @a = split("#", $row); + my $runtime_string = $a[0]; # Aggregations-Alias (nicht benötigt) + $value = defined($a[1])?sprintf("%.4f",$a[1]):undef; + $rsf = $a[2]; # Datum / Zeit für DB-Speicherung + ($date,$time) = split("_",$rsf); + $time =~ s/-/:/g if($time); + + if($time !~ /^(\d{2}):(\d{2}):(\d{2})$/) { + if($aggr =~ /no|day|week|month/) { + $time = "23:59:58"; + } elsif ($aggr =~ /hour/) { + $time = "$time:59:58"; + } + } + if ($value) { + # Daten auf maximale Länge beschneiden (DbLog-Funktion !) + ($device,$type,$event,$reading,$value,$unit) = DbLog_cutCol($hash->{dbloghash},$device,$type,$event,$reading,$value,$unit); + push(@row_array, "$date $time|$device|$type|$event|$reading|$value|$unit"); + } + } + } + + if($optxt =~ /min|max|diff/) { + my %rh = split("§", $arrstr); + foreach my $key (sort(keys(%rh))) { + my @k = split("\\|",$rh{$key}); + $rsf = $k[2]; # Datum / Zeit für DB-Speicherung + $value = defined($k[1])?sprintf("%.4f",$k[1]):undef; + ($date,$time) = split("_",$rsf); + $time =~ s/-/:/g if($time); + + if($time !~ /^(\d{2}):(\d{2}):(\d{2})$/) { + if($aggr =~ /no|day|week|month/) { + $time = "23:59:58"; + } elsif ($aggr =~ /hour/) { + $time = "$time:59:58"; + } + } + if ($value) { + # Daten auf maximale Länge beschneiden (DbLog-Funktion !) 
+ ($device,$type,$event,$reading,$value,$unit) = DbLog_cutCol($hash->{dbloghash},$device,$type,$event,$reading,$value,$unit); + push(@row_array, "$date $time|$device|$type|$event|$reading|$value|$unit"); + } + } + } + + if (@row_array) { + # Schreibzyklus aktivieren + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, mysql_enable_utf8 => $utf8 });}; + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $@"); + return ($wrt,$irowdone,$err); + } + + # check ob PK verwendet wird, @usepkx?Anzahl der Felder im PK:0 wenn kein PK, $pkx?Namen der Felder:none wenn kein PK + my ($usepkh,$usepkc,$pkh,$pkc); + if (!$supk) { + ($usepkh,$usepkc,$pkh,$pkc) = DbRep_checkUsePK($hash,$dbloghash,$dbh); + } else { + Log3 $hash->{NAME}, 5, "DbRep $name -> Primary Key usage suppressed by attribute noSupportPK in DbLog \"$dblogname\""; + } + + if (lc($DbLogType) =~ m(history)) { + # insert history mit/ohne primary key + if ($usepkh && $dbloghash->{MODEL} eq 'MYSQL') { + eval { $sth_ih = $dbh->prepare_cached("INSERT IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'SQLITE') { + eval { $sth_ih = $dbh->prepare_cached("INSERT OR IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'POSTGRESQL') { + eval { $sth_ih = $dbh->prepare_cached("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING"); }; + } else { + eval { $sth_ih = $dbh->prepare_cached("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $@"); + return ($wrt,$irowdone,$err); + } + # update history mit/ohne primary key + if ($usepkh && $hash->{MODEL} eq 'MYSQL') { + $sth_uh = $dbh->prepare("REPLACE INTO history (TYPE, EVENT, VALUE, UNIT, TIMESTAMP, DEVICE, READING) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkh && $hash->{MODEL} eq 'SQLITE') { + $sth_uh = $dbh->prepare("INSERT OR REPLACE INTO history (TYPE, EVENT, VALUE, UNIT, TIMESTAMP, DEVICE, READING) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkh && $hash->{MODEL} eq 'POSTGRESQL') { + $sth_uh = $dbh->prepare("INSERT INTO history (TYPE, EVENT, VALUE, UNIT, TIMESTAMP, DEVICE, READING) VALUES (?,?,?,?,?,?,?) ON CONFLICT ($pkc) + DO UPDATE SET TIMESTAMP=EXCLUDED.TIMESTAMP, DEVICE=EXCLUDED.DEVICE, TYPE=EXCLUDED.TYPE, EVENT=EXCLUDED.EVENT, READING=EXCLUDED.READING, + VALUE=EXCLUDED.VALUE, UNIT=EXCLUDED.UNIT"); + } else { + $sth_uh = $dbh->prepare("UPDATE history SET TYPE=?, EVENT=?, VALUE=?, UNIT=? WHERE (TIMESTAMP=?) AND (DEVICE=?) AND (READING=?)"); + } + } + + if (lc($DbLogType) =~ m(current) ) { + # insert current mit/ohne primary key + if ($usepkc && $hash->{MODEL} eq 'MYSQL') { + eval { $sth_ic = $dbh->prepare("INSERT IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkc && $hash->{MODEL} eq 'SQLITE') { + eval { $sth_ic = $dbh->prepare("INSERT OR IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkc && $hash->{MODEL} eq 'POSTGRESQL') { + eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) 
ON CONFLICT DO NOTHING"); }; + } else { + # old behavior + eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $@"); + return ($wrt,$irowdone,$err); + } + # update current mit/ohne primary key + if ($usepkc && $hash->{MODEL} eq 'MYSQL') { + $sth_uc = $dbh->prepare("REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkc && $hash->{MODEL} eq 'SQLITE') { + $sth_uc = $dbh->prepare("INSERT OR REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkc && $hash->{MODEL} eq 'POSTGRESQL') { + $sth_uc = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT ($pkc) + DO UPDATE SET TIMESTAMP=EXCLUDED.TIMESTAMP, DEVICE=EXCLUDED.DEVICE, TYPE=EXCLUDED.TYPE, EVENT=EXCLUDED.EVENT, READING=EXCLUDED.READING, + VALUE=EXCLUDED.VALUE, UNIT=EXCLUDED.UNIT"); + } else { + $sth_uc = $dbh->prepare("UPDATE current SET TIMESTAMP=?, TYPE=?, EVENT=?, VALUE=?, UNIT=? WHERE (DEVICE=?) AND (READING=?)"); + } + } + + eval { $dbh->begin_work() if($dbh->{AutoCommit}); }; + if ($@) { + Log3($name, 2, "DbRep $name -> Error start transaction for history - $@"); + } + + Log3 $hash->{NAME}, 4, "DbRep $name - data prepared to db write:"; + + # SQL-Startzeit + my $wst = [gettimeofday]; + + my $ihs = 0; + my $uhs = 0; + foreach my $row (@row_array) { + my @a = split("\\|",$row); + $timestamp = $a[0]; + $device = $a[1]; + $type = $a[2]; + $event = $a[3]; + $reading = $a[4]; + $value = $a[5]; + $unit = $a[6]; + Log3 $hash->{NAME}, 4, "DbRep $name - $row"; + + eval { + # update oder insert history + if (lc($DbLogType) =~ m(history) ) { + my $rv_uh = $sth_uh->execute($type,$event,$value,$unit,$timestamp,$device,$reading); + if ($rv_uh == 0) { + $sth_ih->execute($timestamp,$device,$type,$event,$reading,$value,$unit); + $ihs++; + } else { + $uhs++; + } + } + # update oder insert current + if (lc($DbLogType) =~ m(current) ) { + my $rv_uc = $sth_uc->execute($timestamp,$type,$event,$value,$unit,$device,$reading); + if ($rv_uc == 0) { + $sth_ic->execute($timestamp,$device,$type,$event,$reading,$value,$unit); + } + } + }; + + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->rollback; + $dbh->disconnect; + $ihs = 0; + $uhs = 0; + return ($wrt,0,$err); + } else { + $irowdone++; + } + } + + eval {$dbh->commit() if(!$dbh->{AutoCommit});}; + $dbh->disconnect; + + Log3 $hash->{NAME}, 3, "DbRep $name - number of lines updated in \"$dblogname\": $uhs"; + Log3 $hash->{NAME}, 3, "DbRep $name - number of lines inserted into \"$dblogname\": $ihs"; + + # SQL-Laufzeit ermitteln + $wrt = tv_interval($wst); + } + +return ($wrt,$irowdone,$err); +} + +#################################################################################################### +# Werte eines Array in DB schreiben +# Übergabe-Array: $date_ESC_$time_ESC_$device_ESC_$type_ESC_$event_ESC_$reading_ESC_$value_ESC_$unit +# $histupd = 1 wenn history update, $histupd = 0 nur history insert +# +#################################################################################################### +sub DbRep_WriteToDB($$$@) { + my ($name,$dbh,$dbloghash,$histupd,@row_array) = @_; + my $hash = $defs{$name}; + my $dblogname = $dbloghash->{NAME}; + my $DbLogType = AttrVal($dbloghash->{NAME}, "DbLogType", "History"); + my $supk = 
AttrVal($dbloghash->{NAME}, "noSupportPK", 0); + my $wrt = 0; + my $irowdone = 0; + my ($sth_ih,$sth_uh,$sth_ic,$sth_uc,$err); + + # check ob PK verwendet wird, @usepkx?Anzahl der Felder im PK:0 wenn kein PK, $pkx?Namen der Felder:none wenn kein PK + my ($usepkh,$usepkc,$pkh,$pkc); + if (!$supk) { + ($usepkh,$usepkc,$pkh,$pkc) = DbRep_checkUsePK($hash,$dbloghash,$dbh); + } else { + Log3 $hash->{NAME}, 5, "DbRep $name -> Primary Key usage suppressed by attribute noSupportPK in DbLog \"$dblogname\""; + } + + if (lc($DbLogType) =~ m(history)) { + # insert history mit/ohne primary key + if ($usepkh && $dbloghash->{MODEL} eq 'MYSQL') { + eval { $sth_ih = $dbh->prepare_cached("INSERT IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'SQLITE') { + eval { $sth_ih = $dbh->prepare_cached("INSERT OR IGNORE INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkh && $dbloghash->{MODEL} eq 'POSTGRESQL') { + eval { $sth_ih = $dbh->prepare_cached("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT DO NOTHING"); }; + } else { + eval { $sth_ih = $dbh->prepare_cached("INSERT INTO history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $@"); + return ($wrt,$irowdone,$err); + } + # update history mit/ohne primary key + if ($usepkh && $dbloghash->{MODEL} eq 'MYSQL') { + $sth_uh = $dbh->prepare("REPLACE INTO history (TYPE, EVENT, VALUE, UNIT, TIMESTAMP, DEVICE, READING) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkh && $dbloghash->{MODEL} eq 'SQLITE') { + $sth_uh = $dbh->prepare("INSERT OR REPLACE INTO history (TYPE, EVENT, VALUE, UNIT, TIMESTAMP, DEVICE, READING) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkh && $dbloghash->{MODEL} eq 'POSTGRESQL') { + $sth_uh = $dbh->prepare("INSERT INTO history (TYPE, EVENT, VALUE, UNIT, TIMESTAMP, DEVICE, READING) VALUES (?,?,?,?,?,?,?) ON CONFLICT ($pkc) + DO UPDATE SET TIMESTAMP=EXCLUDED.TIMESTAMP, DEVICE=EXCLUDED.DEVICE, TYPE=EXCLUDED.TYPE, EVENT=EXCLUDED.EVENT, READING=EXCLUDED.READING, + VALUE=EXCLUDED.VALUE, UNIT=EXCLUDED.UNIT"); + } else { + $sth_uh = $dbh->prepare("UPDATE history SET TYPE=?, EVENT=?, VALUE=?, UNIT=? WHERE (TIMESTAMP=?) AND (DEVICE=?) AND (READING=?)"); + } + } + + if (lc($DbLogType) =~ m(current) ) { + # insert current mit/ohne primary key + if ($usepkc && $dbloghash->{MODEL} eq 'MYSQL') { + eval { $sth_ic = $dbh->prepare("INSERT IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkc && $dbloghash->{MODEL} eq 'SQLITE') { + eval { $sth_ic = $dbh->prepare("INSERT OR IGNORE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } elsif ($usepkc && $dbloghash->{MODEL} eq 'POSTGRESQL') { + eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) 
ON CONFLICT DO NOTHING"); }; + } else { + # old behavior + eval { $sth_ic = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); }; + } + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $@"); + return ($wrt,$irowdone,$err); + } + # update current mit/ohne primary key + if ($usepkc && $dbloghash->{MODEL} eq 'MYSQL') { + $sth_uc = $dbh->prepare("REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkc && $dbloghash->{MODEL} eq 'SQLITE') { + $sth_uc = $dbh->prepare("INSERT OR REPLACE INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?)"); + } elsif ($usepkc && $dbloghash->{MODEL} eq 'POSTGRESQL') { + $sth_uc = $dbh->prepare("INSERT INTO current (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES (?,?,?,?,?,?,?) ON CONFLICT ($pkc) + DO UPDATE SET TIMESTAMP=EXCLUDED.TIMESTAMP, DEVICE=EXCLUDED.DEVICE, TYPE=EXCLUDED.TYPE, EVENT=EXCLUDED.EVENT, READING=EXCLUDED.READING, + VALUE=EXCLUDED.VALUE, UNIT=EXCLUDED.UNIT"); + } else { + $sth_uc = $dbh->prepare("UPDATE current SET TIMESTAMP=?, TYPE=?, EVENT=?, VALUE=?, UNIT=? WHERE (DEVICE=?) AND (READING=?)"); + } + } + + eval { $dbh->begin_work() if($dbh->{AutoCommit}); }; + if ($@) { + Log3($name, 2, "DbRep $name -> Error start transaction for history - $@"); + } + + Log3 $hash->{NAME}, 5, "DbRep $name - data prepared to db write:"; + + # SQL-Startzeit + my $wst = [gettimeofday]; + + my $ihs = 0; + my $uhs = 0; + foreach my $row (@row_array) { + my ($date,$time,$device,$type,$event,$reading,$value,$unit) = ($row =~ /^(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)_ESC_(.*)$/); + Log3 $hash->{NAME}, 5, "DbRep $name - $row"; + my $timestamp = $date." 
".$time; + + eval { + # update oder insert history + if (lc($DbLogType) =~ m(history) ) { + my $rv_uh = 0; + if($histupd) { + $rv_uh = $sth_uh->execute($type,$event,$value,$unit,$timestamp,$device,$reading); + } + if ($rv_uh == 0) { + $sth_ih->execute($timestamp,$device,$type,$event,$reading,$value,$unit); + $ihs++; + } else { + $uhs++; + } + } + # update oder insert current + if (lc($DbLogType) =~ m(current) ) { + my $rv_uc = $sth_uc->execute($timestamp,$type,$event,$value,$unit,$device,$reading); + if ($rv_uc == 0) { + $sth_ic->execute($timestamp,$device,$type,$event,$reading,$value,$unit); + } + } + }; + + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $@"); + $dbh->rollback; + $ihs = 0; + $uhs = 0; + return ($wrt,0,$err); + } else { + $irowdone++; + } + } + + eval {$dbh->commit() if(!$dbh->{AutoCommit});}; + + Log3 $hash->{NAME}, 3, "DbRep $name - number of lines updated in \"$dblogname\": $uhs" if($uhs); + Log3 $hash->{NAME}, 3, "DbRep $name - number of lines inserted into \"$dblogname\": $ihs" if($ihs); + + # SQL-Laufzeit ermitteln + $wrt = tv_interval($wst); + +return ($wrt,$irowdone,$err); +} + +################################################################ +# check ob primary key genutzt wird +################################################################ +sub DbRep_checkUsePK ($$$){ + my ($hash,$dbloghash,$dbh) = @_; + my $name = $hash->{NAME}; + my $dbconn = $dbloghash->{dbconn}; + my $upkh = 0; + my $upkc = 0; + my (@pkh,@pkc); + + my $db = (split("=",(split(";",$dbconn))[0]))[1]; + eval {@pkh = $dbh->primary_key( undef, undef, 'history' );}; + eval {@pkc = $dbh->primary_key( undef, undef, 'current' );}; + my $pkh = (!@pkh || @pkh eq "")?"none":join(",",@pkh); + my $pkc = (!@pkc || @pkc eq "")?"none":join(",",@pkc); + $pkh =~ tr/"//d; + $pkc =~ tr/"//d; + $upkh = 1 if(@pkh && @pkh ne "none"); + $upkc = 1 if(@pkc && @pkc ne "none"); + Log3 $hash->{NAME}, 5, "DbRep $name -> Primary Key used in $db.history: $pkh"; + Log3 $hash->{NAME}, 5, "DbRep $name -> Primary Key used in $db.current: $pkc"; + +return ($upkh,$upkc,$pkh,$pkc); +} + +################################################################ +# extrahiert aus dem übergebenen Wert nur die Zahl +################################################################ +sub DbRep_numval ($){ + my ($val) = @_; + return undef if(!defined($val)); + $val = ($val =~ /(-?\d+(\.\d+)?)/ ? 
$1 : ""); + +return $val; +} + +#################################################################################################### +# blockierende DB-Abfrage +# liefert Ergebnis sofort zurück, setzt keine Readings +#################################################################################################### +sub DbRep_dbValue($$) { + my ($name,$cmd) = @_; + my $hash = $defs{$name}; + my $dbloghash = $hash->{dbloghash}; + my $dbconn = $dbloghash->{dbconn}; + my $dbuser = $dbloghash->{dbuser}; + my $dblogname = $dbloghash->{NAME}; + my $dbpassword = $attr{"sec$dblogname"}{secret}; + my $utf8 = defined($hash->{UTF8})?$hash->{UTF8}:0; + my $srs = AttrVal($name, "sqlResultFieldSep", "|"); + my ($err,$ret,$dbh); + + readingsDelete($hash, "errortext"); + ReadingsSingleUpdateValue ($hash, "state", "running", 1); + + eval {$dbh = DBI->connect("dbi:$dbconn", $dbuser, $dbpassword, { PrintError => 0, RaiseError => 1, AutoCommit => 1, AutoInactiveDestroy => 1, mysql_enable_utf8 => $utf8 });}; + + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $err"); + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return ($err); + } + + my $sql = ($cmd =~ m/\;$/)?$cmd:$cmd.";"; + + # Ausgaben + Log3 ($name, 4, "DbRep $name - -------- New selection --------- "); + Log3 ($name, 4, "DbRep $name - Command: dbValue"); + Log3 ($name, 4, "DbRep $name - SQL execute: $sql"); + + # SQL-Startzeit + my $st = [gettimeofday]; + + my ($sth,$r); + eval {$sth = $dbh->prepare($sql); + $r = $sth->execute(); + }; + + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $err"); + $dbh->disconnect; + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return ($err); + } + + my $nrows = 0; + if($sql =~ m/^\s*(select|pragma|show)/is) { + while (my @line = $sth->fetchrow_array()) { + Log3 ($name, 4, "DbRep $name - SQL result: @line"); + $ret .= join("$srs", @line); + $ret .= "\n"; + # Anzahl der Datensätze + $nrows++; + } + + } else { + $nrows = $sth->rows; + eval {$dbh->commit() if(!$dbh->{AutoCommit});}; + if ($@) { + $err = $@; + Log3 ($name, 2, "DbRep $name - $err"); + $dbh->disconnect; + ReadingsSingleUpdateValue ($hash, "errortext", $err, 1); + ReadingsSingleUpdateValue ($hash, "state", "error", 1); + return ($err); + } + $ret = $nrows; + } + + $sth->finish; + $dbh->disconnect; + + # SQL-Laufzeit ermitteln + my $rt = tv_interval($st); + + my $com = (split(" ",$sql, 2))[0]; + Log3 ($name, 4, "DbRep $name - Number of entries processed in db $hash->{DATABASE}: $nrows by $com"); + + # Readingaufbereitung + readingsBeginUpdate($hash); + ReadingsBulkUpdateTimeState($hash,undef,$rt,"done"); + readingsEndUpdate($hash, 1); + +return ($ret); +} + +#################################################################################################### +# blockierende DB-Abfrage +# liefert den Wert eines Device:Readings des nächsmöglichen Logeintrags zum +# angegebenen Zeitpunkt +# +# Aufruf: DbReadingsVal("","",","") +#################################################################################################### +sub DbReadingsVal($$$$) { + my ($name, $devread, $ts, $default) = @_; + my $hash = $defs{$name}; + my $dbmodel = $defs{$hash->{HELPER}{DBLOGDEVICE}}{MODEL}; + my ($err,$ret,$sql); + + unless(defined($defs{$name})) { + return ("DbRep-device \"$name\" doesn't exist."); + } + unless($defs{$name}{TYPE} eq "DbRep") { + return ("\"$name\" is not a DbRep-device but of type 
\"".$defs{$name}{TYPE}."\""); + } + unless($ts =~ /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/) { + return ("timestamp has not a valid format. Use \"YYYY-MM-DD hh:mm:ss\" as timestamp."); + } + my ($dev,$reading) = split(":",$devread); + unless($dev && $reading) { + return ("device:reading must be specified !"); + } + + if($dbmodel eq "MYSQL") { + $sql = "select value from ( + ( select *, TIMESTAMPDIFF(SECOND, '$ts', timestamp) as diff from history + where device='$dev' and reading='$reading' and timestamp >= '$ts' order by timestamp asc limit 1 + ) + union + ( select *, TIMESTAMPDIFF(SECOND, timestamp, '$ts') as diff from history + where device='$dev' and reading='$reading' and timestamp < '$ts' order by timestamp desc limit 1 + ) + ) x order by diff limit 1;"; + + } elsif ($dbmodel eq "SQLITE") { + $sql = "select value from ( + select value, (julianday(timestamp) - julianday('$ts')) * 86400.0 as diff from history + where device='MyWetter' and reading='temperature' and timestamp >= '$ts' + union + select value, (julianday('$ts') - julianday(timestamp)) * 86400.0 as diff from history + where device='MyWetter' and reading='temperature' and timestamp < '$ts' + ) + x order by diff limit 1;"; + + } elsif ($dbmodel eq "POSTGRESQL") { + $sql = "select value from ( + select value, EXTRACT(EPOCH FROM (timestamp - '$ts')) as diff from history + where device='MyWetter' and reading='temperature' and timestamp >= '$ts' + union + select value, EXTRACT(EPOCH FROM ('$ts' - timestamp)) as diff from history + where device='MyWetter' and reading='temperature' and timestamp < '$ts' + ) + x order by diff limit 1;"; + } else { + return ("DbReadingsVal is not implemented for $dbmodel"); + } + + $hash->{LASTCMD} = "dbValue $sql"; + $ret = DbRep_dbValue($name,$sql); + $ret = $ret?$ret:$default; + +return $ret; +} + +#################################################################################################### +# Browser Refresh nach DB-Abfrage +#################################################################################################### +sub browser_refresh($) { + my ($hash) = @_; + RemoveInternalTimer($hash, "browser_refresh"); + {FW_directNotify("#FHEMWEB:WEB", "location.reload('true')", "")}; + # map { FW_directNotify("#FHEMWEB:$_", "location.reload(true)", "") } devspec2array("WEB.*"); +return; +} + +#################################################################################################### +# Test-Sub zu Testzwecken +#################################################################################################### +sub testexit ($) { +my ($hash) = @_; +my $name = $hash->{NAME}; + + if ( !DbRep_Connect($hash) ) { + Log3 ($name, 2, "DbRep $name - DB connect failed. Database down ? 
"); + ReadingsSingleUpdateValue ($hash, "state", "disconnected", 1); + return; + } else { + my $dbh = $hash->{DBH}; + Log3 ($name, 3, "DbRep $name - --------------- FILE INFO --------------"); + my $sqlfile = $dbh->sqlite_db_filename(); + Log3 ($name, 3, "DbRep $name - FILE : $sqlfile "); +# # $dbh->table_info( $catalog, $schema, $table) +# my $sth = $dbh->table_info('', '%', '%'); +# my $tables = $dbh->selectcol_arrayref($sth, {Columns => [3]}); +# my $table = join ', ', @$tables; +# Log3 ($name, 3, "DbRep $name - SQL_TABLES : $table"); + + Log3 ($name, 3, "DbRep $name - --------------- PRAGMA --------------"); + my @InfoTypes = ('sqlite_db_status'); + + + foreach my $row (@InfoTypes) { + # my @linehash = $dbh->$row; + + my $array= $dbh->$row ; + # push(@row_array, @array); + while ((my $key, my $val) = each %{$array}) { + Log3 ($name, 3, "DbRep $name - PRAGMA : $key : ".%{$val}); + } + + } + # $sth->finish; + + $dbh->disconnect; + } +return; +} + + +1; + +=pod +=item helper +=item summary Reporting & Management content of DbLog-DB's. Content is depicted as readings +=item summary_DE Reporting & Management von DbLog-DB Content. Darstellung als Readings +=begin html + + +

DbRep

+
    +
    + The purpose of this module is to browse and manage the content of DbLog-databases. The search results can be evaluated with regard to various aggregations, and the appropriate
    + Readings will be filled. The data selection is done by specifying device, reading and the time settings for the begin and the end of the selection.

    + 
    + Almost all database operations are implemented non-blocking. Exceptions to this are pointed out explicitly.
    + Optionally, the execution time of the SQL statements in the background can also be determined and provided as a reading.
    + (refer to attributes).
    + All existing readings will be deleted when a new operation starts. By attribute "readingPreventFromDel" a comma separated list of readings can be provided
    + which should be prevented from deletion.
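    + A sketch of such a setting (the device name Rep.LogDB1 is only an example; the pattern matches e.g. the "..._message" readings):
    + 
    + attr Rep.LogDB1 readingPreventFromDel .*_message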

    + + Currently the following functions are provided:

    + +
        +
      • Selection of all datasets within adjustable time limits.
      • +
      • Display of datasets of a Device/Reading-combination within adjustable time limits.
      • +
      • Selection of datasets by usage of dynamically calculated time limits at execution time.
      • +
      • Highlighting of doublets when selecting and displaying datasets (fetchrows)
      • +
      • Calculation of quantity of datasets of a Device/Reading-combination within adjustable time limits and several aggregations.
      • +
      • The calculation of summary-, difference-, maximum-, minimum- and averageValues of numeric readings within adjustable time limits and several aggregations.
      • +
      • write back results of summary-, difference-, maximum-, minimum- and average calculation into the database
      • +
      • The deletion of datasets. The scope of deletion can be limited by Device and/or Reading as well as by fixed or dynamically calculated time limits at execution time.
      • +
      • export of datasets to file (CSV-format).
      • +
      • import of datasets from file (CSV-Format).
      • +
      • rename of device/readings in datasets
      • +
      • change of reading values in the database (changeValue)
      • +
      • automatic rename of device names in datasets and other DbRep-definitions after FHEM "rename" command (see DbRep-Agent)
      • +
      • Execution of arbitrary user specific SQL-commands (non-blocking)
      • +
      • Execution of arbitrary user specific SQL-commands (blocking) for usage in the user's own code (dbValue)
      • +
      • creation of backups of the database in running state non-blocking (MySQL, SQLite)
      • +
      • transfer dumpfiles to a FTP server after backup incl. version control
      • +
      • restore of SQLite- and MySQL-Dumps non-blocking
      • +
      • optimize the connected database (optimizeTables, vacuum)
      • +
      • report of existing database processes (MySQL)
      • +
      • purge content of current-table
      • +
      • fill up the current-table with a (tunable) extract of the history-table
      • +
      • delete consecutive datasets with different timestamp but same values (clearing up consecutive doublets)
      • +
      • Repair of a corrupted SQLite database ("database disk image is malformed")
      • +
      • transmission of datasets from source database into another (Standby) database (syncStandby)
      • +
      • reduce the amount of datasets in database (reduceLog)
      • +
    +
    + 
    + To activate the autorename function, the attribute "role" has to be assigned to a defined DbRep-device. The standard role after DbRep definition is "Client".
    + Please read more about the autorename function in section DbRep-Agent.

    + 
    + DbRep provides a UserExit function. With this interface the user can execute own program code dependent on freely
    + definable Reading/Value-combinations (Regex). The interface works without, respectively independent from, event
    + generation.
    + Further information can be found in the description of attribute "userExitFn".
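    + A minimal sketch of such a function in 99_myUtils.pm (the names repUserExit and Rep.LogDB1 are freely chosen
    + examples; the function is called with the device name, the reading and its value):
    + 
    + sub repUserExit {
    +   my ($name,$reading,$value) = @_;
    +   # e.g. write every updated reading of the DbRep-device into the logfile
    +   Log3($name, 3, "DbRep $name - userExitFn: $reading = $value");
    +   return;
    + }
    + 
    + # register it, here reacting on all reading:value combinations
    + attr Rep.LogDB1 userExitFn repUserExit .*:.*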

    + 
    + Once a DbRep-Device is defined, the function DbReadingsVal is provided.
    + With this function you can, similar to the well known ReadingsVal, get a reading value from the database.
    + The function is executed blocking.
    + The command syntax is:

    + +
      + DbReadingsVal("<name>","<device:reading>","<timestamp>","<default>")

      + + Examples:
      + $ret = DbReadingsVal("Rep.LogDB1","MyWetter:temperature","2018-01-13 08:00:00","");
      + attr <name> userReadings oldtemp {DbReadingsVal("Rep.LogDB1","MyWetter:temperature","2018-04-13 08:00:00","")} +

      + + + + + + + +
      <name> : name of the DbRep-Device to request
      <device:reading> : device:reading whose value is to be delivered
      <timestamp> : timestamp of the reading whose value is to be delivered (*) in the form "YYYY-MM-DD hh:mm:ss"
      <default> : default value if no reading value can be retrieved
      +
    +
    + (*) If no value can be retrieved at exactly the requested <timestamp>, the chronologically closest reading
    + value is delivered instead (e.g. a request for 08:00:00 may return the value stored at 07:59:30 if that is the nearest entry).

    + + FHEM-Forum:
    + Modul 93_DbRep - Reporting and Management of database content (DbLog).

    + +
    + + +
+ Preparations

+
    + The module requires the usage of a DbLog instance; the credentials of this database definition are used.
    + Only the content of table "history" is included, unless stated otherwise.

    + 
    + Overview of the other Perl modules DbRep is using:

    + + Net::FTP (only if FTP-Transfer after database dump is used)
    + Net::FTPSSL (only if encrypted FTP-Transfer after database dump is used)
    + POSIX
    + Time::HiRes
    + Time::Local
    + Scalar::Util
    + DBI
    + Color (FHEM-module)
    + IO::Compress::Gzip
    + IO::Uncompress::Gunzip
    + Blocking (FHEM-module)

    + 
    + For performance reasons the following index should be created in addition:
    + + CREATE INDEX Report_Idx ON `history` (TIMESTAMP, READING) USING BTREE; + +
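    + The statement above uses MySQL syntax. A sketch of the equivalent index for SQLite or PostgreSQL (without the
    + MySQL-specific "USING BTREE" clause):
    + 
    + CREATE INDEX Report_Idx ON history (TIMESTAMP, READING);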
+
+ + +Definition + +
+
    + + define <name> DbRep <name of DbLog-instance> + + +

    + (<name of DbLog-instance> - the name of the DbLog database instance to be analyzed has to be inserted)
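    + Example (assuming an existing DbLog-device named "LogDB"):
    + 
    + define Rep.LogDB1 DbRep LogDB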
+ +

+ + +Set +
    + 
    + Currently the following set-commands are included. They are used to trigger the evaluations and define the evaluation options themselves.
    + The criteria for searching the database content and for the aggregation are defined by setting several attributes. A typical sequence is sketched below.
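    + A typical sequence as a sketch (device, reading and time values are only examples):
    + 
    + attr Rep.LogDB1 device MyWetter
    + attr Rep.LogDB1 reading temperature
    + attr Rep.LogDB1 timestamp_begin 2018-01-01 00:00:00
    + set Rep.LogDB1 countEntries history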

    + +
        +
      • averageValue [display | writeToDB]
        + - calculates the average value of database column "VALUE" in the period given by
        + the timestamp attributes which are set.
        + The reading to evaluate must be specified by attribute "reading".
        + By attribute "averageCalcForm" the calculation variant for the average determination is configured.
        + 
        + If no option or the option "display" is specified, the results are only displayed. Using
        + option "writeToDB" the calculated results are stored in the database with a new reading
        + name.
        + The new reading name is built from a prefix and the original reading name,
        + in which the original reading name can be replaced by the value of attribute "readingNameMap".
        + The prefix is made up of the creation function and the aggregation.
        + The timestamp of the newly stored readings is derived from the aggregation period
        + if no unique point of time of the result can be determined.
        + The field "EVENT" will be filled with "calculated".

        + +
          + Example of building a new reading name from the original reading "totalpac":
          + avgam_day_totalpac
          + # <creation function>_<aggregation>_<original reading>
          +
        +

      • + +
      • cancelDump - stops a running database dump.

      • + +
      • changeValue - changes the saved value of readings.
        + If the selection is limited to particular device/reading-combinations by the
        + attributes "device" respectively "reading", it is taken into account, as are
        + possibly defined time limits set by the time attributes (time.*).
        + If no limits are set, the whole database is scanned and the specified value will be + changed.

        + +
          + Syntax:
          + set <name> changeValue "<old string>","<new string>"

          + + The strings have to be quoted and separated by comma. + A "string" can be:
          + +
          +<old string> : * a simple string with/without spaces, e.g. "OL 12"
          +               * a string with usage of SQL-wildcard, e.g. "%OL%"
          +                 
          +<new string> : * a simple string with/without spaces, e.g. "12 kWh"
          +               * Perl code embedded in "{}" with quotes, e.g. "{($VALUE,$UNIT) = split(" ",$VALUE)}". 
          +                 The variables $VALUE and $UNIT are passed to the perl expression and can be changed within
          +                 the perl code. The returned values of $VALUE and $UNIT are saved into the database fields
          +                 VALUE respectively UNIT of the dataset.
          +
          + + Examples:
          + set <name> changeValue "OL","12 OL"
          + # the old field value "OL" is changed to "12 OL".

          + + set <name> changeValue "%OL%","12 OL"
          + # contains the field VALUE the substring "OL", it is changed to "12 OL".

          + + set <name> changeValue "12 kWh","{($VALUE,$UNIT) = split(" ",$VALUE)}"
          + # the old field value "12 kWh" is splitted to VALUE=12 and UNIT=kWh and saved into the database fields

          + + set <name> changeValue "24%","{$VALUE = (split(" ",$VALUE))[0]}"
          + # if the old field value begins with "24", it is split and VALUE=24 is saved (e.g. "24 kWh")
          + 

          + 
          + In summary, the relevant attributes to control function changeValue are:

          + +
            + + + + + + + +
            device : selection only of datasets which contain <device>
            reading : selection only of datasets which contain <reading>
            time.* : a number of attributes to limit selection by time
            executeBeforeProc : execute a FHEM command (or perl-routine) before start of changeValue
            executeAfterProc : execute a FHEM command (or perl-routine) after changeValue is finished
            +
          +
          +
          + + Note:
          + Even though the function itself is designed non-blocking, make sure the assigned DbLog-device
          + is operating in asynchronous mode to avoid blocking of FHEMWEB.

          +
          +
        + +
      • countEntries [history|current] - provides the number of table entries (default: history) within the period set
        + by the timestamp attributes if set.
        + If no timestamp attributes are set, all entries of the table will be counted.
        + The attributes "device" and "reading" can be used to
        + limit the evaluation.

      • + + +
      • delEntries - deletes all database entries or only the database entries specified by attributes Device and/or + Reading and the entered time period between "timestamp_begin", "timestamp_end" (if set) or "timeDiffToNow/timeOlderThan".

        + +
          + "timestamp_begin" is set -> deletes db entries from this timestamp until current date/time
          + "timestamp_end" is set -> deletes db entries until this timestamp
          + both Timestamps are set -> deletes db entries between these timestamps
          + "timeOlderThan" is set -> delete entries older than current time minus "timeOlderThan"
          + "timeDiffToNow" is set -> delete db entries from current time minus "timeDiffToNow" until now
          + +
          + For security reasons the attribute "allowDeletion" needs to be set to unlock the
          + delete-function.
          + 
          + The relevant attributes to control function delEntries are:

          + +
            + + + + + + + + +
            allowDeletion : unlock the delete function
            device : selection only of datasets which contain <device>
            reading : selection only of datasets which contain <reading>
            time.* : a number of attributes to limit selection by time
            executeBeforeProc : execute a FHEM command (or perl-routine) before start of delEntries
            executeAfterProc : execute a FHEM command (or perl-routine) after delEntries is finished
            +
          +
          +
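          + + Example (a sketch; assuming "timeOlderThan" is given in seconds):
          + attr <name> allowDeletion 1
          + attr <name> timeOlderThan 31536000
          + set <name> delEntries
          + # deletes all datasets older than one year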
          + + +
          +
        + +
      • delSeqDoublets [adviceRemain | adviceDelete | delete] - show respectively delete identical sequential datasets. + For that purpose Device, Reading and Value of the sequential datasets are compared. + Not deleted are the first and the last dataset of an aggregation period (e.g. hour, day, week and so on) as + well as the datasets before or after a value change (database field VALUE).
        + The attributes to define the scope of aggregation, time period, device and reading are + considered. If the attribute aggregation is not set or set to "no", the default aggregation + period "day" is used. For datasets containing numerical values it is possible to determine a variance with attribute + "seqDoubletsVariance". Consecutive numerical datasets within this variance are handled as identical and will be + deleted. +

        + +
          adviceRemain : simulates the remaining datasets in database after delete-operation (nothing will be deleted !)
          adviceDelete : simulates the datasets to delete in database (nothing will be deleted !)
          delete : deletes the consecutive doublets (see example)
          +
        +
        + + For security reasons the attribute "allowDeletion" needs to be set to + execute the "delete" option.
        + The amount of datasets shown by the commands "delSeqDoublets adviceDelete" and "delSeqDoublets adviceRemain" is + initially limited (default: 1000) and can be adjusted by attribute "limit". + The adjustment of "limit" has no impact on the "delSeqDoublets delete" function, but affects ONLY the + display of the data.
        + Before and after "delSeqDoublets" it is possible to execute a FHEM command or Perl script + (please see attributes "executeBeforeProc" and "executeAfterProc"). +

        + +
          + Example - the datasets remaining after executing the delete-option are marked in bold:

          +
            + 2017-11-25_00-00-05__eg.az.fridge_Pwr__power 0
            + 2017-11-25_00-02-26__eg.az.fridge_Pwr__power 0
            + 2017-11-25_00-04-33__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-06-10__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-08-21__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-08-59__eg.az.fridge_Pwr__power 60.32
            + 2017-11-25_01-11-21__eg.az.fridge_Pwr__power 56.26
            + 2017-11-25_01-27-54__eg.az.fridge_Pwr__power 6.19
            + 2017-11-25_01-28-51__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-31-00__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-33-59__eg.az.fridge_Pwr__power 0
            + 2017-11-25_02-39-29__eg.az.fridge_Pwr__power 0
            + 2017-11-25_02-41-18__eg.az.fridge_Pwr__power 105.28
            + 2017-11-25_02-41-26__eg.az.fridge_Pwr__power 61.52
            + 2017-11-25_03-00-06__eg.az.fridge_Pwr__power 47.46
            + 2017-11-25_03-00-33__eg.az.fridge_Pwr__power 0
            + 2017-11-25_03-02-07__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-37-42__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-40-10__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-42-24__eg.az.fridge_Pwr__power 1
            + 2017-11-25_23-42-24__eg.az.fridge_Pwr__power 1
            + 2017-11-25_23-45-27__eg.az.fridge_Pwr__power 1
            + 2017-11-25_23-47-07__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-55-27__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-48-15__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-50-21__eg.az.fridge_Pwr__power 59.1
            + 2017-11-25_23-55-14__eg.az.fridge_Pwr__power 52.31
            + 2017-11-25_23-58-09__eg.az.fridge_Pwr__power 51.73
            +
          +
        + +
      • +
        +
        + +
      • deviceRename - renames the device name of a device inside the connected database (Internal DATABASE). + The device name will always be changed in the entire database. Possibly set time limits or restrictions by + attributes device and/or reading will not be considered.

        + +
          + Example:
          + set <name> deviceRename ST_5000,ST5100
          + # The amount of renamed device names (datasets) will be displayed in reading "device_renamed".
          + # If the device name to be renamed was not found in the database, a WARNING will appear in reading "device_not_renamed".
          + # Appropriate entries will be written to Logfile if verbose >= 3 is set. +

          + + Note:
          + Even though the function itself is designed non-blocking, make sure the assigned DbLog-device + is operating in asynchronous mode to prevent FHEMWEB from blocking.

          +
          +
        + +
      • diffValue [display | writeToDB] + - calculates the difference of database column "VALUE" in the period given by + attributes "timestamp_begin", "timestamp_end" or "timeDiffToNow / timeOlderThan". + The reading to evaluate must be defined using attribute "reading". + This function is mostly useful if reading values increase continuously and no value differences are written to the database. + The difference will be calculated from the first available dataset (VALUE-field) to the last available dataset within the + specified time limits/aggregation, whereby a balanced difference value of the previous aggregation period will be transferred to the + following aggregation period in case this period contains a value.
        + A possible counter overrun (restart with value "0") will be considered (compare attribute "diffAccept").

        + + If only one dataset is found within the evaluation period, the difference can be calculated only in combination with the balanced + difference of the previous aggregation period. In this case a logical inaccuracy regarding the assignment of the difference to the particular aggregation period + is possible. Hence a warning will be placed in "state" and the reading "less_data_in_period" will be created with a list of periods + in which only one dataset was found. +

        + +
          + Note:
          + Within the evaluation respectively aggregation period (day, week, month, etc.) you should make available at least one dataset + at the beginning and one dataset at the end of each aggregation period to make the difference calculation as accurate as possible. +
          +
          +
        + + If no option or the option "display" is specified, the results are only displayed. Using + option "writeToDB" the calculation results are stored in the database with a new reading + name.
        + The new reading name is built of a prefix and the original reading name, + whereby the original reading name can be replaced by the value of attribute "readingNameMap". + The prefix is made up of the creation function and the aggregation.
        + The timestamp of the new stored readings is derived from the aggregation period if no unique point in time of the result + can be determined. + The field "EVENT" will be filled with "calculated".

        + +
          + Example of building a new reading name from the original reading "totalpac":
          + diff_day_totalpac
          + # <creation function>_<aggregation>_<original reading>
          +
        +
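        + +
        + Example of a typical usage (a sketch; the reading name is a placeholder):
        + attr <name> reading etotal
        + attr <name> aggregation day
        + set <name> diffValue writeToDB
        + # calculates the daily differences of reading "etotal" and stores them
        + # in the database as reading "diff_day_etotal"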

      • + + +
      • dumpMySQL [clientSide | serverSide] + - creates a dump of the connected MySQL database.
        + Depending on the selected option the dump will be created on client- or on server-side.
        + The variants differ concerning the executing system, the creation location, the usage of + attributes, the function result and the needed hardware resources.
        + The option "clientSide" e.g. needs more powerful FHEM-Server hardware, but saves all available + tables including possibly created views.
        + With attribute "dumpCompress" a compression of dump file after creation can be switched on. +

        + +
          + Option clientSide
          + The dump will be created by the client (FHEM-Server) and will be saved in the FHEM log-directory by + default. + The target directory can be set by attribute "dumpDirLocal" and has to be + writable by the FHEM process.
          + Before executing the dump a table optimization can be processed optionally (see attribute + "optimizeTablesBeforeDump") as well as a FHEM-command (attribute "executeBeforeProc"). + After the dump a FHEM-command can be executed as well (see attribute "executeAfterProc").

          + + Note:
          + To avoid FHEM from blocking, you have to operate DbLog in asynchronous mode if the table + optimization is to be used !


          + + By the attributes "dumpMemlimit" and "dumpSpeed" the run-time behavior of the function can be + controlled to optimize the performance and demand of ressources.

          + + The attributes relevant for function "dumpMySQL clientSide" are:

          +
            dumpComment : User comment in head of dump file
            dumpCompress : compress of dump files after creation
            dumpDirLocal : the local destination directory for dump file creation
            dumpMemlimit : limits memory usage
            dumpSpeed : limits CPU utilization
            dumpFilesKeep : number of dump files to keep
            executeBeforeProc : execution of FHEM command (or perl-routine) before dump
            executeAfterProc : execution of FHEM command (or perl-routine) after dump
            optimizeTablesBeforeDump : table optimization before dump
            +
          +
          + + After a successfully finished dump the old dumpfiles are deleted and only the number of files + defined by attribute "dumpFilesKeep" (default: 3) remains in the target directory + "dumpDirLocal". If "dumpFilesKeep = 0" is set, all + dumpfiles (including the currently created file) are deleted. This setting can be helpful if FTP transmission is used + and the created dumps should only be kept in the FTP destination directory.

          + + The naming convention of dump files is: <dbname>_<date>_<time>.sql[.gzip]

          + + To rebuild the database from a dump file the command:

          + +
            + set <name> restoreMySQL <filename>

            +
          + + can be used.

          + + The created dumpfile (uncompressed) can be imported on the MySQL-Server by:

          + +
            + mysql -u <user> -p <dbname> < <filename>.sql

            +
          + + to restore the database from the dump file.


          + + + Option serverSide
          + The dump will be created on the MySQL-Server and will be saved in its Home-directory + by default.
          + The whole history-table (not the current-table) will be exported CSV-formatted without + any restrictions.
          + + Before executing the dump a table optimization can be processed optionally (see attribute + "optimizeTablesBeforeDump") as well as a FHEM-command (attribute "executeBeforeProc").

          + + Note:
          + To avoid FHEM from blocking, you have to operate DbLog in asynchronous mode if the table + optimization is to be used !


          + + After the dump a FHEM-command can be executed as well (see attribute "executeAfterProc").

          + + The attributes relevant for function "dumpMySQL serverSide" are:

          +
            dumpDirRemote : destination directory of dump file on remote server
            dumpCompress : compress of dump files after creation
            dumpDirLocal : the local mounted directory dumpDirRemote
            dumpFilesKeep : number of dump files to keep
            executeBeforeProc : execution of FHEM command (or perl-routine) before dump
            executeAfterProc : execution of FHEM command (or perl-routine) after dump
            optimizeTablesBeforeDump : table optimization before dump
            +
          +
          + + The target directory can be set by attribute "dumpDirRemote". + It must be located on the MySQL-Host and has to be writable by the MySQL-server process.
          + The used database user must have the "FILE"-privilege.

          + + Note:
          + If the internal version management of DbRep should be used and the size of the created dumpfile be + reported, you have to mount the remote MySQL-Server directory "dumpDirRemote" on the client + and publish it to the DbRep-device by setting the attribute + "dumpDirLocal".
          + The same is necessary if FTP transfer after the dump is to be used (attribute "ftpUse" respectively "ftpUseSSL"). +

          + +
            + Example:
            + attr <name> dumpDirRemote /volume1/ApplicationBackup/dumps_FHEM/
            + attr <name> dumpDirLocal /sds1/backup/dumps_FHEM/
            + attr <name> dumpFilesKeep 2

            + + # The dump will be created remote on the MySQL-Server in directory + '/volume1/ApplicationBackup/dumps_FHEM/'.
            + # The internal version management searches in local mounted directory '/sds1/backup/dumps_FHEM/' + for present dumpfiles and deletes these files except the last two versions.
            +
            +
          + + If the internal version management is used, after a successfully finished dump old dumpfiles will + be deleted and only the number defined by attribute "dumpFilesKeep" (default: 3) remains in the target + directory "dumpDirLocal" (the mounted "dumpDirRemote"). + In that case FHEM needs write permissions to the directory "dumpDirLocal".

          + + The naming convention of dump files is: <dbname>_<date>_<time>.csv[.gzip]

          + + You can start a restore of table history from serverSide-Backup by command:

          +
            + set <name> <restoreMySQL> <filename>.csv[.gzip]

            +
          + +

          + + FTP-Transfer after Dump
          + If this possibility is to be used, the attribute "ftpUse" or + "ftpUseSSL" has to be set, the latter if encryption for FTP is to be used. + The module also carries out the version control of dump files in the FTP destination by attribute + "ftpDumpFilesKeep".
          + Further attributes are:

          + +
            ftpUse : FTP transfer after dump will be switched on (without SSL encryption)
            ftpUser : User for FTP-server login, default: anonymous
            ftpUseSSL : FTP transfer with SSL encryption after dump
            ftpDebug : debugging of FTP communication for diagnostics
            ftpDir : directory on FTP-server in which the file will be send into (default: "/")
            ftpDumpFilesKeep : leave the number of dump files in FTP-destination <ftpDir> (default: 3)
            ftpPassive : set if passive FTP is to be used
            ftpPort : FTP-Port, default: 21
            ftpPwd : password of FTP-User, not set by default
            ftpServer : name or IP-address of FTP-server. absolutely essential !
            ftpTimeout : timeout of FTP-connection in seconds (default: 30).
            +
          +
          +
          +
        + +

      • + +
      • dumpSQLite - creates a dump of the connected SQLite database.
        + This function uses the SQLite Online Backup API and allows creating a consistent backup of the + database during normal operation. + The dump will be saved in the FHEM log-directory by default. + The target directory can be defined by attribute "dumpDirLocal" and + has to be writable by the FHEM process.
        + Before executing the dump a table optimization can be processed optionally (see attribute + "optimizeTablesBeforeDump"). +

        + + Note:
        + To avoid FHEM from blocking, you have to operate DbLog in asynchronous mode if the table + optimization is to be used !


        + + Before and after the dump a FHEM-command can be executed (see attribute "executeBeforeProc", + "executeAfterProc").

        + + The attributes relevant for function "dumpMySQL serverSide" are:

        +
          dumpCompress : compress of dump files after creation
          dumpDirLocal : target directory of the dumpfiles
          dumpFilesKeep : number of dump files to keep
          executeBeforeProc : execution of FHEM command (or perl-routine) before dump
          executeAfterProc : execution of FHEM command (or perl-routine) after dump
          optimizeTablesBeforeDump : table optimization before dump
          +
        +
        + + After a successfully finished dump the old dumpfiles are deleted and only the number defined by attribute + "dumpFilesKeep" (default: 3) remains in the target directory "dumpDirLocal". If "dumpFilesKeep = 0" is set, all + dumpfiles (including the currently created file) are deleted. This setting can be helpful if FTP transmission is used + and the created dumps should only be kept in the FTP destination directory.

        + + The naming convention of dump files is: <dbname>_<date>_<time>.sqlitebkp[.gzip]

        + + The database can be restored by command "set <name> restoreSQLite <filename>"
        + The created dump file can be transferred to an FTP-server. Please see the explanations about FTP + transfer in topic "dumpMySQL".

        +

      • + +
      • eraseReadings - deletes all created readings in the device, except reading "state" and readings, which are + contained in exception list defined by attribute "readingPreventFromDel". +

      • + +
      • exportToFile [<file>] + - exports DB-entries in CSV format to a file within the time period specified by time attributes.
        + Limitation of selections can be done by attributes device and/or + reading. + The filename can be defined by attribute "expimpfile".
        + Optionally a file can be specified as a command option (/path/file) and overrides a possibly + defined attribute "expimpfile". The filename may contain wildcards as described + in the attribute section of "expimpfile". +
        + By setting attribute "aggregation" the export of datasets will be split into time slices + corresponding to the specified aggregation. + If, for example, "aggregation = month" is set, the data are selected in monthly packets and written + into the exportfile. Thereby the usage of main memory is optimized if a very large amount of data + is exported, and the "died prematurely" error is avoided.

        + + The attributes relevant for this function are:

        +
          aggregation : determination of selection time slices
          device : select only datasets which contain <device>
          reading : select only datasets which contain <reading>
          executeBeforeProc : execution of FHEM command (or perl-routine) before export
          executeAfterProc : execution of FHEM command (or perl-routine) after export
          expimpfile : the name of exportfile
          time.* : a number of attributes to limit selection by time
          +
        + +
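        + + Example (a sketch; the path is a placeholder):
        + attr <name> expimpfile /opt/fhem/log/export_%Y-%m-%d.csv
        + attr <name> aggregation month
        + set <name> exportToFile
        + # exports the selected datasets in monthly packets into the date-stamped file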

      • + +
      • fetchrows [history|current] + - provides all table entries (default: history) + within the time period set by time attributes respectively the selection conditions + set by attributes "device" and "reading". + A set aggregation will not be considered.
        + The direction of data selection can be determined by attribute + "fetchRoute".

        + + Every reading of the result is composed of the dataset timestring, an index, the device name + and the reading name. + The function has the capability to recognize multiple occurring datasets (doublets). + Such doublets are marked by an index > 1.
        + Doublets can be highlighted in terms of color by setting attribute "fetchMarkDuplicates".

        + + Note:
        + Highlighted readings are not displayed again after a restart or rereadcfg because they are not + saved in the statefile.

        + + This attribute is preallocated with some colors, but can be changed by colorpicker-widget:

        + +
          + + attr <DbRep-Device> widgetOverride fetchMarkDuplicates:colorpicker + +
        +
        + + The readings of the result are composed like the following scheme:

        + +
          + Example:
          + 2017-10-22_03-04-43__1__SMA_Energymeter__Bezug_WirkP_Kosten_Diff
          + # <date>_<time>__<index>__<device>__<reading> +
        +
        + + For a better overview the relevant attributes are listed here in a table:

        + +
          fetchRoute : direction of selection read in database
          limit : limits the number of datasets to select and display
          fetchMarkDuplicates : Highlighting of found doublets
          device : select only datasets which contain <device>
          reading : select only datasets which contain <reading>
          time.* : A number of attributes to limit selection by time
          valueFilter : filter datasets which are to show by a regular expression
          +
        +
        +
        + + Note:
        + Although the module is designed non-blocking, a very large selection result (huge number of rows) + can overwhelm the browser session respectively FHEMWEB. + Therefore the result set can be limited by attribute "limit". + Of course this attribute can be increased if your system capabilities allow a higher workload.

        +

      • + +
      • insert - use it to insert data into table "history" manually. Input values for Date, Time and Value are mandatory. The database fields for Type and Event will be filled with "manual" automatically and the values of Device and Reading will be taken from the corresponding set attributes.

        + +
          + input format: Date,Time,Value,[Unit]
          + # Unit is optional, attributes of device, reading must be set !
          + # If "Value=0" has to be inserted, use "Value = 0.0" to do it.

          + + example: 2016-08-01,23:00:09,TestValue,TestUnit
          + # Spaces are NOT allowed in field values !
          +
          + + Note:
          + Please consider inserting AT LEAST two datasets into the intended time / aggregation period (day, week, month, etc.) because + this is needed by function diffValue. Otherwise no difference can be calculated and diffValue will print out "0" for the respective period ! +
          +
          + +
        + +
      • importFromFile [<file>] + - imports data in CSV format from file into database.
        + The filename can be defined by attribute "expimpfile".
        + Optionally a file can be specified as a command option (/path/file) and overrides a possibly + defined attribute "expimpfile". The filename may contain wildcards as described + in the attribute section of "expimpfile".

        + +
          + dataset format:
          + "TIMESTAMP","DEVICE","TYPE","EVENT","READING","VALUE","UNIT"

          + # The fields "TIMESTAMP","DEVICE","TYPE","EVENT","READING" and "VALUE" have to be set. The field "UNIT" is optional. + The file content will be imported transactional. That means all of the content will be imported or, in case of error, nothing of it. + If an extensive file will be used, DON'T set verbose = 5 because of a lot of datas would be written to the logfile in this case. + It could lead to blocking or overload FHEM !

          + + Example for a source dataset:
          + "2016-09-25 08:53:56","STP_5000","SMAUTILS","etotal: 11859.573","etotal","11859.573",""
          +
          + + The attributes relevant for this function are:

          +
            executeBeforeProc : execution of FHEM command (or perl-routine) before import
            executeAfterProc : execution of FHEM command (or perl-routine) after import
            expimpfile : the name of the import file
            +
          + +
          +
        +
        + +
      • maxValue [display | writeToDB] + - calculates the maximum value of database column "VALUE" in the period given by + attributes "timestamp_begin", "timestamp_end" or "timeDiffToNow / timeOlderThan". + The reading to evaluate must be defined using attribute "reading". + The evaluation contains the timestamp of the last occurrence of the identified maximum value + within the given period.
        + + If no option or the option "display" is specified, the results are only displayed. Using + option "writeToDB" the calculated results are stored in the database with a new reading + name.
        + The new reading name is built of a prefix and the original reading name, + whereby the original reading name can be replaced by the value of attribute "readingNameMap". + The prefix is made up of the creation function and the aggregation.
        + The timestamp of the new stored readings is derived from the aggregation period if no unique point in time of the result + can be determined. + The field "EVENT" will be filled with "calculated".

        + +
          + Example of building a new reading name from the original reading "totalpac":
          + max_day_totalpac
          + # <creation function>_<aggregation>_<original reading>
          +
        +

      • + +
      • minValue [display | writeToDB] + - calculates the minimum value of database column "VALUE" in the period given by + attributes "timestamp_begin", "timestamp_end" or "timeDiffToNow / timeOlderThan". + The reading to evaluate must be defined using attribute "reading". + The evaluation contains the timestamp of the first occurrence of the identified minimum + value within the given period.
        + + If no option or the option "display" is specified, the results are only displayed. Using + option "writeToDB" the calculated results are stored in the database with a new reading + name.
        + The new reading name is built of a prefix and the original reading name, + whereby the original reading name can be replaced by the value of attribute "readingNameMap". + The prefix is made up of the creation function and the aggregation.
        + The timestamp of the new stored readings is derived from the aggregation period if no unique point in time of the result + can be determined. + The field "EVENT" will be filled with "calculated".

        + +
          + Example of building a new reading name from the original reading "totalpac":
          + min_day_totalpac
          + # <creation function>_<aggregation>_<original reading>
          +
        +

      • + +
      • optimizeTables - optimize tables in the connected database (MySQL).
        + Before and after an optimization it is possible to execute a FHEM command. + (please see attributes "executeBeforeProc", "executeAfterProc") +

        + +
          + Note:
          + Even though the function itself is designed non-blocking, make sure the assigned DbLog-device + is operating in asynchronous mode to prevent FHEMWEB from blocking.

          +
          +
        + +
      • readingRename - renames the reading name of a device inside the connected database (see Internal DATABASE). + The reading name will always be changed in the entire database. Possibly set time limits or restrictions by + attributes device and/or reading will not be considered.

        + +
          + Example:
          + set <name> readingRename <old reading name>,<new reading name>
          + # The amount of renamed reading names (datasets) will be displayed in reading "reading_renamed".
          + # If the reading name to be renamed was not found in the database, a WARNING will appear in reading "reading_not_renamed".
          + # Appropriate entries will be written to Logfile if verbose >= 3 is set. +

          + + Note:
          + Even though the function itself is designed non-blocking, make sure the assigned DbLog-device + is operating in asynchronous mode to prevent FHEMWEB from blocking.

          +
          +
        + +
      • repairSQLite - repairs a corrupted SQLite database.
        + A corruption is usually present when the error message "database disk image is malformed" + appears in reading "state" of the connected DbLog-device. + If the command is started, the connected DbLog-device will first be disconnected from the + database for 10 hours (36000 seconds) automatically (breakup time). After the repair is + finished, the DbLog-device will be reconnected to the (repaired) database immediately.
        + As an argument the command can be completed by a differing breakup time (in seconds).
        + The corrupted database is saved as <database>.corrupt in the same directory.

        + +
          + Example:
          + set <name> repairSQLite
          + # the repair of the database will be attempted, breakup time is 10 hours
          + set <name> repairSQLite 600
          + # the repair of the database will be attempted, breakup time is 10 minutes +

          + + Note:
          + It can't be guaranteed that the repair attempt proceeds successfully and no data loss results. + Depending on the corruption severity data loss may occur or the repair will fail even though + no error appears during the repair process. Please make sure a valid backup took place !

          +
          +
        + +
      • restoreMySQL <File> - restores a database from a serverSide- or clientSide-dump.
        + The function provides a drop-down-list of files which can be used for restore.

        + + Usage of serverSide-Dumps
        + The content of the history-table will be restored from a serverSide-dump. + For that purpose the remote directory "dumpDirRemote" of the MySQL-Server has to be mounted on the + client and made accessible to the DbRep-device by setting attribute + "dumpDirLocal" to the appropriate value.
        + All files with extension "csv[.gzip]" whose filename begins with the name of the connected database + (see Internal DATABASE) are listed. +

        + + Usage of clientSide-Dumps
        + All tables and views (if present) are restored. + The directory which contains the dump files has to be set by attribute + "dumpDirLocal" to make it usable by the DbRep device.
        + All files with extension "sql[.gzip]" whose filename begins with the name of the connected database + (see Internal DATABASE) are listed.
        + The restore speed depends on the server variable "max_allowed_packet". You can change + this variable in file my.cnf to adapt the speed. Please consider the need of sufficient resources + (especially RAM). +

        + + The database user needs rights for database management, e.g.:
        + CREATE, ALTER, INDEX, DROP, SHOW VIEW, CREATE VIEW +
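        + +
        + Example (a sketch; the filename is a placeholder following the documented naming convention):
        + set <name> restoreMySQL fhemdb_2018-10-17_20-00-00.sql.gzip
        + # restores all tables (and views) from the clientSide-dump file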

        +

      • + +
      • restoreSQLite <File>.sqlitebkp[.gzip] - restores a backup of a SQLite database.
        + The function provides a drop-down-list of files which can be used for restore. + The data stored in the current database are deleted respectively overwritten. + All files with extension "sqlitebkp[.gzip]" whose filename begins with the name of the connected database + are listed.

        +

      • + +
      • sqlCmd - executes an arbitrary user specific command.
        + If the command contains an operation to delete data, the attribute + "allowDeletion" has to be set for security reasons.
        + The statement doesn't consider limitations by attributes "device", "reading", "time.*" + respectively "aggregation".
        + If the attribute "timestamp_begin" respectively "timestamp_end" + is assumed in the statement, it is possible to use placeholder "§timestamp_begin§" respectively + "§timestamp_end§" on suitable place.

        + + If you want to update a dataset, you have to add "TIMESTAMP=TIMESTAMP" to the update-statement to avoid changing the + original timestamp.

        + +
          + Examples of SQL-statements:

          +
            +
          • set <name> sqlCmd select DEVICE, count(*) from history where TIMESTAMP >= "2017-01-06 00:00:00" group by DEVICE having count(*) > 800
          • +
          • set <name> sqlCmd select DEVICE, count(*) from history where TIMESTAMP >= "2017-05-06 00:00:00" group by DEVICE
          • +
          • set <name> sqlCmd select DEVICE, count(*) from history where TIMESTAMP >= §timestamp_begin§ group by DEVICE
          • +
          • set <name> sqlCmd select * from history where DEVICE like "Te%t" order by `TIMESTAMP` desc
          • +
          • set <name> sqlCmd select * from history where `TIMESTAMP` > "2017-05-09 18:03:00" order by `TIMESTAMP` desc
          • +
          • set <name> sqlCmd select * from current order by `TIMESTAMP` desc
          • +
          • set <name> sqlCmd select sum(VALUE) as 'Einspeisung am 04.05.2017', count(*) as 'Anzahl' FROM history where `READING` = "Einspeisung_WirkP_Zaehler_Diff" and TIMESTAMP between '2017-05-04' AND '2017-05-05'
          • +
          • set <name> sqlCmd delete from current
          • +
          • set <name> sqlCmd delete from history where TIMESTAMP < "2016-05-06 00:00:00"
          • +
          • set <name> sqlCmd update history set TIMESTAMP=TIMESTAMP,VALUE='Val' WHERE VALUE='TestValue'
          • +
          • set <name> sqlCmd select * from history where DEVICE = "Test"
          • +
          • set <name> sqlCmd insert into history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES ('2017-05-09 17:00:14','Test','manuell','manuell','Tes§e','TestValue','°C')
          • +
          +
          + + The result of the statement will be shown in reading "SqlResult". + The formatting of the result can be chosen by attribute "sqlResultFormat", and the used + field separator can be determined by attribute "sqlResultFieldSep".

          + + The module provides a command history once a sqlCmd command was executed successfully. + To use this option, activate the attribute "sqlCmdHistoryLength" with the list length you want.

          + + For a better overview the relevant attributes for sqlCmd are listed in a table:

          + +
            allowDeletion : activates the capability to delete datasets
            sqlResultFormat : determines presentation style of command result
            sqlResultFieldSep : choice of a useful field separator for result
            sqlCmdHistoryLength : activates command history and length
            +
          +
          +
          + + Note:
          + Even though the module works non-blocking regarding database operations, a very large + result set (number of rows/readings) could block the browser session respectively + FHEMWEB. + If you are unsure about the result of the statement, you should preventively add a limit to + the statement.

          +
          +
        + +
      • sqlCmdHistory - If history is activated by attribute "sqlCmdHistoryLength", an already + successfully executed sqlCmd-command can be repeated from a drop-down list.
        + By execution of the last list entry, "__purge_historylist__", the list itself can be deleted.
        + If the statement contains "," this character is displayed as "<c>" in the history + list due to technical restrictions.
        +

      • + +
      • sqlSpecial - This function provides a drop-down list with a selection of prepared reportings.
        + The statement's result is depicted in reading "SqlResult". + The result can be formatted by attribute "sqlResultFormat", + as well as the used field separator by attribute "sqlResultFieldSep". +

        + + The relevant attributes for this function are:

        +
          sqlResultFormat : determines the formatting of the result
          sqlResultFieldSep : determines the used field separator in statement result
          +
        +
        + + The following predefined reportings are selectable:

        +
          50mostFreqLogsLast2days : reports the 50 most occurring log entries of the last 2 days
          allDevCount : all devices occurring in the database and their quantity
          allDevReadCount : all device/reading combinations occurring in the database and their quantity
          +
        + +
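        + + Example:
        + set <name> sqlSpecial 50mostFreqLogsLast2days
        + # the 50 most occurring log entries of the last 2 days are reported in reading "SqlResult"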


      • + +
      • sumValue [display | writeToDB] + - calculates the sum of database column "VALUE" in the period given by + attributes "timestamp_begin", "timestamp_end" or + "timeDiffToNow / timeOlderThan". The reading to evaluate must be defined using attribute + "reading". Using this function is mostly reasonable if value differences of readings + are written to the database.
        + + If no option or the option "display" is specified, the results are only displayed. Using + option "writeToDB" the calculation results are stored in the database with a new reading + name.
        + The new reading name is built of a prefix and the original reading name, + whereby the original reading name can be replaced by the value of attribute "readingNameMap". + The prefix is made up of the creation function and the aggregation.
        + The timestamp of the new stored readings is derived from the aggregation period if no unique point in time of the result + can be determined. + The field "EVENT" will be filled with "calculated".

        + +
          + Example of building a new reading name from the original reading "totalpac":
          + sum_day_totalpac
          + # <creation function>_<aggregation>_<original reading>
          +
          +
        +
        + +
      • syncStandby <DbLog-Device Standby> + - datasets of the connected database (source) are transmitted into another database + (Standby-database).
        + Here the "<DbLog-Device Standby>" is the DbLog-Device what is connected to the + Standby-database.

        + All the datasets which are determined by timestamp-attributes + respectively the attributes "device" and "reading" are transmitted.
        + The datasets are transmitted in time slices according to the adjusted aggregation. + If the attribute "aggregation" has the value "no" or "month", the datasets are transmitted + automatically in daily time slices into the standby-database. + Source- and standby-database can be of different types. +

        + + The relevant attributes to control the syncStandby function are:

        + +
          aggregation : adjustment of time slices for data transmission (hour,day,week)
          device : transmit only datasets which contain <device>
          reading : transmit only datasets which contain <reading>
          time.* : A number of attributes to limit selection by time
          +
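        + +
        + Example (a sketch; "LogDBStandby" is a placeholder for the DbLog-device of the standby-database,
        + and "timeDiffToNow" is assumed to be given in seconds):
        + attr <name> timeDiffToNow 86400
        + set <name> syncStandby LogDBStandby
        + # transmits the datasets of the last 24 hours into the standby-database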
        +
        +
        +

      • + +
      • tableCurrentFillup - the current-table will be filled up with an extract of the history-table. + The attributes for limiting time and device, reading are considered. + Thereby the content of the extract can be affected. In the associated DbLog-device the attribute "DbLogType" should be set to + "SampleFill/History".
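        + +
        + Example:
        + attr <DbLog-device> DbLogType SampleFill/History
        + set <name> tableCurrentFillup
        + # fills the current-table with an extract of the history-table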

      • + +
      • tableCurrentPurge - deletes the content of the current-table. No limits, e.g. by attributes "timestamp_begin", "timestamp_end", device or reading, + are considered.

      • + +
      • vacuum - optimize tables in the connected database (SQLite, PostgreSQL).
        + Before and after an optimization it is possible to execute a FHEM command. + (please see attributes "executeBeforeProc", "executeAfterProc") +

        + +
          + Note:
          + Even though the function itself is designed non-blocking, make sure the assigned DbLog-device + is operating in asynchronous mode to prevent FHEM from blocking.

          + +

        + +
        +
    + + For all evaluation variants (except sqlCmd, deviceRename, readingRename) applies:
    + In addition to the needed reading, the device can be specified to restrict the datasets for reporting / function. + If no time limit attribute is set but aggregation is set, the period from the oldest dataset in the database to the current + date/time will be used as selection criterion. If the oldest dataset couldn't be identified, then '1970-01-01 01:00:00' is used + as start date (see also get <name> "minTimestamp"). + If neither a time limit attribute nor aggregation is set, the selection on the database runs without timestamp criterion. +

    + + Note:
    + + If you are in detail view it could be necessary to refresh the browser to see the result of the operation as soon as "state = done" is shown in the DeviceOverview section. + +

    + +
+ + +Get +
    + + The get-commands of DbRep allow retrieving some metadata of the used database instance. + Those are for example adjusted server parameters, server variables, database status and table information. The available get-functions depend on + the used database type. So for SQLite currently only "get svrinfo" is usable. The functions natively deliver a lot of output values. + They can be limited by function specific attributes. The filter has to be set up as a comma separated list. + The SQL wildcard (%) can be used to set up the list arguments. +

    + + Note:
    + After executing a get-function in detail view please refresh the browser to see the results ! +

    + +
        +
      • blockinginfo - lists the currently running system-wide background processes (BlockingCalls) together with their information. + If a character string is too long (e.g. arguments), it is reported shortened. +
      • +

        + +
      • dbstatus - lists global information about the MySQL server status (e.g. information related to cache, threads, bufferpools, etc.). + Initially all available information is reported. Using the attribute "showStatus" the quantity of + results can be limited to show only the desired values. Further detailed information about the meaning of the items is + explained there.
        + +
          + Example
          + get <name> dbstatus
          + attr <name> showStatus %uptime%,%qcache%
          + # Only readings containing "uptime" and "qcache" in name will be created + +

          +
        + +
      • dbValue <SQL-statement> - + Executes the specified SQL-statement in a blocking manner. Because of its mode of operation + this function is particularly convenient for the user's own Perl scripts.
        + The input accepts multi-line commands and delivers multi-line results as well. + If several fields are selected and passed back, the fields are separated by the separator defined + by attribute "sqlResultFieldSep" (default "|"). Several result lines + are separated by newline ("\n").
        + This function only sets/updates status readings, the userExitFn function isn't called. +
        + +
          + Examples for use in FHEMWEB
          + {fhem("get <name> dbValue select device,count(*) from history where timestamp > '2018-04-01' group by device")}
          + get <name> dbValue select device,count(*) from history where timestamp > '2018-04-01' group by device
          + {CommandGet(undef,"Rep.LogDB1 dbValue select device,count(*) from history where timestamp > '2018-04-01' group by device")}
          +
        + +

        + If you create a little routine in 99_myUtils, for example: +
        +
        +sub dbval($$) {
        +  my ($name,$cmd) = @_;
        +  my $ret = CommandGet(undef,"$name dbValue $cmd"); 
        +return $ret;
        +}                            
        +                            
        + it can be accessed with e.g. the following calls: +

        + +
          + Examples:
          + {dbval("<name>","select count(*) from history")}
          + $ret = dbval("<name>","select count(*) from history");
          +
        + +
      • +

        + +
      • dbvars - lists global information about the MySQL system variables. Included are e.g. readings related to InnoDB-Home, datafile path, + memory or cache parameters and so on. The output initially reports all available information. Using the + attribute "showVariables" the quantity of results can be limited to show only the desired values. + Further detailed information about the meaning of the items is explained + there.
        + +
          + Example
          + get <name> dbvars
          + attr <name> showVariables %version%,%query_cache%
          + # Only readings containing "version" and "query_cache" in name will be created + +

          +
        + +
      • minTimestamp - Identifies the oldest timestamp in the database (will be executed implicitly at FHEM start). + The timestamp is used as the beginning of the data selection if no time attribute is set to determine the + start date. +
      • +

        + +
      • procinfo - reports the existing database processes in a summary table (only MySQL).
        + Typically only the processes of the connection user (set in the DbLog configuration file) will be + reported. If all processes have to be reported, the global "PROCESS" right has to be granted to the + user.
        + As of MariaDB 5.3 a progress report is provided for particular SQL statements + (table row "PROGRESS"). So you can track, for instance, the degree of processing during an index + creation.
        + Further information can be found + there.
        +
      • +

        + +
      • svrinfo - common database server information, e.g. DBMS-version, server address and port and so on. The quantity of elements to get depends + on the database type. Using the attribute "showSvrInfo" the quantity of results can be limited to show only + the desired values. Further detailed information about the meaning of the items is explained + there.
        + +
          + Example
          + get <name> svrinfo
          + attr <name> showSvrInfo %SQL_CATALOG_TERM%,%NAME%
          + # Only readings containing "SQL_CATALOG_TERM" and "NAME" in name will be created + +

          +
        + +
      • tableinfo - accesses detailed information about the tables in the MySQL database which is connected by the DbRep-device. + All available tables in the connected database will be selected by default. + Using the attribute "showTableInfo" the results can be limited to tables you want to show. + Further detailed information about the meaning of the items is explained there.
        + +
          + Example
          + get <name> tableinfo
          + attr <name> showTableInfo current,history
          + # Only information related to tables "current" and "history" will be created + +

          +
        +
        +
    + +
+ + + +Attributes + +
+
    + Using the module specific attributes you are able to define the scope of evaluation and the aggregation.

    + + Note for SQL-Wildcard Usage:
    + Within the attribute values of "device" and "reading" you may use SQL-Wildcard "%", Character "_" is not supported as a wildcard. + The character "%" stands for any characters.
    + This rule is valid for all functions except "insert", "importFromFile" and "deviceRename".
    + The function "insert" doesn't allow setting the mentioned attributes containing the wildcard "%".
    + In readings the wildcard character "%" will be replaced by "/" to meet the rules of allowed characters in readings. +

    + +
        + +
      • aggregation - Aggregation of Device/Reading-selections. Possible values are hour, day, week, month or "no". + Delivers e.g. the count of database entries for a day (countEntries), the summation of + difference values of a reading (sumValue) and so on. Using aggregation "no" (default) no + aggregation happens, but the output contains all values of Device/Reading in the defined time period.

      • + + +
      • allowDeletion - unlocks the delete-function

      • + + +
      • averageCalcForm - specifies the calculation variant used to determine the average value by "averageValue".

        + + At the moment the following methods are implemented:

        + +
          avgArithmeticMean : the arithmetic average is calculated (default)
          avgDailyMeanGWS : calculates the daily mean temperature according to the + specifications of the German Weather Service (pls. see helpful hints by get versionNotes).
          + This variant uses aggregation "day" automatically.
          avgTimeWeightMean : a time-weighted average value is calculated
          +
        +
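        + + Example:
        + attr <name> averageCalcForm avgDailyMeanGWS
        + # averageValue determines the daily mean temperature according to the
        + # specifications of the German Weather Service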

      • + + +
      • device - Selection of a particular device.
        + You can specify device specifications (devspec).
        + Inside of device specifications a SQL wildcard (%) will be evaluated as a normal ASCII character. + The device names are derived from the device specification and the active devices in FHEM before + the SQL selection is carried out.

      • + +
          + Examples:
          + attr <name> device TYPE=DbRep
          + # select datasets of active present devices with Type "DbRep"
          + attr <name> device MySTP_5000
          + # select datasets of device "MySTP_5000"
          + attr <name> device SMA.*
          + # select datasets of devices starting with "SMA"
          + attr <name> device SMA_Energymeter,MySTP_5000
          + # select datasets of devices "SMA_Energymeter" and "MySTP_5000"
          + attr <name> device %5000
          + # select datasets of devices ending with "5000"
          +
        +
        + + Please see also device specifications (devspec). +

        + + +
      • diffAccept - valid for function diffValue. diffAccept determines the threshold up to which a calculated + difference between two immediately consecutive datasets will be accepted + (default = 20).
        + Hence faulty DB entries with a disproportionately high difference value will be eliminated and + don't distort the result. + If a threshold overrun happens, the reading "diff_overrun_limit_<diffLimit>" will be + generated (<diffLimit> will be substituted with the current attribute value).
        + The reading contains a list of the relevant pairs of values. Using verbose=3 this list will also + be reported in the FHEM logfile. +

      • + +
          + Example report in the logfile if the threshold of diffAccept=10 is exceeded:

          + + DbRep Rep.STP5000.etotal -> data ignored while calc diffValue due to threshold overrun (diffAccept = 10):
          + 2016-04-09 08:50:50 0.0340 -> 2016-04-09 12:42:01 13.3440

          + + # The first dataset with a value of 0.0340 is untypically low compared to the next value of 13.3440 and results in an untypically + high difference value.
          + # Now you have to decide if the (second) dataset should be deleted, ignored, or if the attribute diffAccept should be adjusted. +

        + + + +
      • disable - deactivates the module

      • + + +
      • dumpComment - User-comment. It will be included in the header of the created dumpfile by + command "dumpMySQL clientSide".

      • + + +
      • dumpCompress - if set, the dump files are compressed after the operation of "dumpMySQL" respectively "dumpSQLite"

      • + + +
      • dumpDirLocal - Target directory of database dumps by command "dumpMySQL clientSide" + (default: "{global}{modpath}/log/" on the FHEM-Server).
        + In this directory also the internal version administration searches for old backup-files + and deletes them if the number exceeds attribute "dumpFilesKeep". + The attribute is also relevant to publish a local mounted directory "dumpDirRemote" to + DbRep.

      • + + +
      • dumpDirRemote - Target directory of database dumps by command "dumpMySQL serverSide" + (default: the Home-directory of MySQL-Server on the MySQL-Host).

      • + + +
      • dumpMemlimit - tolerable memory consumption for the SQL-script during the generation period (default: 100000 characters). + Please adjust this parameter if you notice memory bottlenecks and thereby caused performance problems + on your specific hardware.

      • + + +
      • dumpSpeed - Number of lines which will be selected from the source database with one select by dump-command + "dumpMySQL clientSide" (default: 10000). + This parameter impacts the run-time and consumption of resources directly.

      • + + +
      • dumpFilesKeep - The specified number of dumpfiles remains in the dump directory (default: 3). + If more (older) files are found, these files will be deleted after a new database dump + was created successfully. + The global attribute "archivesort" will be considered.

      • + + +
      • executeAfterProc - you can specify a FHEM command or perl function which should be executed + after command execution.
        + Perl functions have to be enclosed in {} .

        + +
          + Example:

          + attr <name> executeAfterProc set og_gz_westfenster off;
          + attr <name> executeAfterProc {adump ("<name>")}

          + + # "adump" is a function defined in 99_myUtils.pm e.g.:
          + +
          +sub adump {
          +    my ($name) = @_;
          +    my $hash = $defs{$name};
          +    # own function, e.g.
          +    Log3($name, 3, "DbRep $name -> Dump finished");
          + 
          +    return;
          +}
          +
          +
        +
      • + + +
      • executeBeforeProc - you can specify a FHEM command or perl function which should be executed + before command execution.
        + Perl functions have to be enclosed in {} .

        + +
          + Example:

          + attr <name> executeBeforeProc set og_gz_westfenster on;
          + attr <name> executeBeforeProc {bdump ("<name>")}

          + + # "bdump" is a function defined in 99_myUtils.pm e.g.:
          + +
          +sub bdump {
          +    my ($name) = @_;
          +    my $hash = $defs{$name};
          +    # own function, e.g.
          +    Log3($name, 3, "DbRep $name -> Dump starts now");
          + 
          +    return;
          +}
          +
          +
        +
      • + + +
      • expimpfile - Path/filename for data export/import.

        + + The filename may contain wildcards which are replaced by corresponding values + (see the subsequent table). + Furthermore the filename can contain %-wildcards of the POSIX strftime function of the underlying OS (see your + strftime manual).
        + +
          %L : is replaced by the value of global logdir attribute
          %TSB : is replaced by the (calculated) value of the timestamp_begin attribute
          Common used POSIX-wildcards are:
          %d : day of month (01..31)
          %m : month (01..12)
          %Y : year (1970...)
          %w : day of week (0..6); 0 represents Sunday
          %j : day of year (001..366)
          %U : week number of year with Sunday as first day of week (00..53)
          %W : week number of year with Monday as first day of week (00..53)
          +
        +

      • + +
          + Examples:
          + attr <name> expimpfile /sds1/backup/exptest_%TSB.csv
          + attr <name> expimpfile /sds1/backup/exptest_%Y-%m-%d.csv
          +
        +
        + + + About POSIX wildcard usage please see also explanations in + Filelog.
        +

        + + +
      • fetchMarkDuplicates + - highlighting of multiple occurring datasets in the result of the "fetchrows" command

      • + + +
      • fetchRoute [descent | ascent] - specifies the direction of data selection of the fetchrows-command.

        +
          + descent - the data are read in descending order (default). If the + amount of datasets specified by attribute "limit" is exceeded, + the newest x datasets are shown.

          + ascent - the data are read in ascending order. If the + amount of datasets specified by attribute "limit" is exceeded, + the oldest x datasets are shown.
          +
        + +


      • + + +
      • ftpUse - FTP transfer after dump will be switched on (without SSL encryption). The created + database backup file will be transferred non-blocking to the FTP-server (attribute "ftpServer"). +

      • + + +
      • ftpUseSSL - FTP transfer with SSL encryption after dump. The created database backup file will be transferred + non-blocking to the FTP-server (attribute "ftpServer").

      • + + +
      • ftpUser - User for FTP-server login, default: "anonymous".

      • + + +
      • ftpDebug - debugging of FTP communication for diagnostics.

      • + + +
      • ftpDir - directory on FTP-server in which the file will be send into (default: "/").

      • + + +
      • ftpDumpFilesKeep - leaves the specified number of dump files in the FTP-destination <ftpDir> (default: 3). If more + (older) dump files are present, these files are deleted after a new dump was transferred successfully.

      • + + +
      • ftpPassive - set if passive FTP is to be used

      • + + +
      • ftpPort - FTP-Port, default: 21

      • + + +
      • ftpPwd - password of FTP-User, is not set by default

      • + + +
      • ftpServer - name or IP-address of FTP-server. absolutely essential !

      • + + +
      • ftpTimeout - timeout of FTP-connection in seconds (default: 30).

      • + + +
      • limit - limits the number of datasets selected by "fetchrows", or the number of datasets shown by the "delSeqDoublets adviceDelete" and + "delSeqDoublets adviceRemain" commands (default: 1000). + This limitation should prevent the browser session from overload and + FHEMWEB from blocking. Please change the attribute according to your requirements or change the + selection criteria (decrease the evaluation period).

      • + + +
      • optimizeTablesBeforeDump - if set to "1", the database tables will be optimized before executing the dump (default: 0). Thereby the backup runtime will be extended.

        +
          + Note
          + Table optimization locks the tables and therefore leads to blocking of FHEM if DbLog isn't operating in asynchronous mode (DbLog attribute "asyncMode")!
          +
        +

      • + + +
      • reading - Selection of a particular reading. More than one reading can be specified as a comma-separated list.
        + If the SQL wildcard (%) is used within a list, it is evaluated as a normal ASCII character.
        +

      • + +
          + Examples:
          + attr <name> reading etotal
          + attr <name> reading et%
          + attr <name> reading etotal,etoday
          +
        +

        + + +
      • readingNameMap - the name of the analyzed reading can be overwritten for output

      • + + +
      • role - the role of the DbRep-device. Standard role is "Client".
        + + + The role "Agent" is described in section DbRep-Agent. + +

      • + + +
      • readingPreventFromDel - comma-separated list of readings which should be prevented from deletion when a new operation starts

      • + + +
      • seqDoubletsVariance - accepted variance (+/-) for the command "set <name> delSeqDoublets".
        + The value of this attribute describes the variance up to which consecutive numeric values (VALUE) of datasets are considered identical and will be deleted. "seqDoubletsVariance" is an absolute numerical value which is used as a positive as well as a negative variance.

      • + +
          + Examples:
          + attr <name> seqDoubletsVariance 0.0014
          + attr <name> seqDoubletsVariance 1.45
          +
        +
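+ The decision rule can be pictured by a small Perl sketch (illustration only, not the module's code):
+
+        # $variance as set in the attribute, $last and $curr are consecutive numeric VALUEs
+        my $variance = 0.0014;
+        if (abs($curr - $last) <= $variance) {
+            # the values are treated as identical -> candidate for deletion
+        }
+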

        + + +
      • showproctime - if set, the reading "sql_processing_time" shows the required execution time (in seconds) for the SQL requests. This is not calculated for a single SQL statement, but is the sum of all SQL statements necessary within an executed DbRep function in the background.

      • + + +
      • showStatus - limits the sample space of command "get <name> dbstatus". SQL-Wildcard (%) can be used.

      • + +
          + Example:
          + attr <name> showStatus %uptime%,%qcache%
          + # Only readings containing "uptime" or "qcache" in their name will be shown
          +

        + + +
      • showVariables - limits the sample space of command "get <name> dbvars". SQL-Wildcard (%) can be used.

      • + +
          + Example:
          + attr <name> showVariables %version%,%query_cache%
          + # Only readings containing "version" or "query_cache" in their name will be shown
          +

        + + +
      • showSvrInfo - limits the sample space of command "get <name> svrinfo". SQL-Wildcard (%) can be used.

      • + +
          + Example:
          + attr <name> showSvrInfo %SQL_CATALOG_TERM%,%NAME%
          + # Only readings containing "SQL_CATALOG_TERM" or "NAME" in their name will be shown
          +

        + + +
      • showTableInfo - limits the table names selected by command "get <name> tableinfo". SQL wildcard (%) can be used.

      • + +
          + Example:
          + attr <name> showTableInfo current,history
          + # Only information about the tables "current" and "history" will be shown
          +

        + + +
      • sqlCmdHistoryLength - activates the command history of "sqlCmd" and determines its length

      • + + +
      • sqlResultFieldSep - determines the used field separator (default: "|") in the result of some sql-commands.

      • + + +
      • sqlResultFormat - determines the formatting of the "set <name> sqlCmd" command result. + Possible options are:

        +
          + separated - every line of the result will be generated sequentially in its own reading (default).

          + mline - the result will be generated as multiline in + Reading SqlResult. +

          + sline - the result will be generated as singleline in + Reading SqlResult. + Datasets are separated by "]|[".

          + table - the result will be generated as a table in Reading SqlResult.

          + json - creates the Reading SqlResult as a JSON + coded hash. + Every hash-element consists of the serial number of the dataset (key) + and its value.

          + + + To process the result, you may use a userExitFn in 99_myUtils for example:
          +
          +        sub resfromjson {
          +          my ($name,$reading,$value) = @_;
          +          my $hash = $defs{$name};
          +
          +          if ($reading eq "SqlResult") {
          +            # only the reading SqlResult contains JSON encoded data
          +            my $data = decode_json($value);
          +
          +            foreach my $k (keys(%$data)) {
          +              # use your own processing from here for every hash element,
          +              # e.g. output of every element that contains "Cam"
          +              my $ke = $data->{$k};
          +              if($ke =~ m/Cam/i) {
          +                my ($res1,$res2) = split("\\|", $ke);
          +                Log3($name, 1, "$name - extract element $k by userExitFn: ".$res1." ".$res2);
          +              }
          +            }
          +          }
          +          return;
          +        }
          +
        +
        + + +
      • timeYearPeriod - By this attribute an annual time period will be determined for database data selection. The time limits are calculated dynamically at execution time. An annual period is always determined. Periods of less than a year cannot be set.
        + This attribute is particularly intended to create reports synchronously to a billing period, e.g. of an energy or gas provider.

      • + +
          + Example:

          + attr <name> timeYearPeriod 06-25 06-24

          + # evaluates the database within the time limits June 25 of year AAAA and June 24 of year BBBB.
          + # The year AAAA respectively BBBB is calculated dynamically depending on the current date.
          + # If the current date is >= June 25 and <= December 31, then AAAA = current year and BBBB = current year + 1
          + # If the current date is >= January 01 and <= June 24, then AAAA = current year - 1 and BBBB = current year
        +
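+ The dynamic year assignment described above can be sketched in Perl like this (illustration under the shown assumptions, not the module's implementation):
+
+        # hypothetical period begin "06-25" taken from the attribute value
+        my ($bm, $bd) = (6, 25);
+        my ($mday, $mon, $year) = (localtime())[3,4,5];
+        $mon++; $year += 1900;                             # month 1..12, full year
+
+        # has the period begin already passed in the current year?
+        my $started = ($mon > $bm) || ($mon == $bm && $mday >= $bd);
+        my $aaaa = $started ? $year     : $year - 1;
+        my $bbbb = $started ? $year + 1 : $year;           # selection: AAAA-06-25 .. BBBB-06-24
+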

        + + +
      • timestamp_begin - begin of data selection

      • + + The format of timestamp is as used with DbLog "YYYY-MM-DD HH:MM:SS". For the attributes "timestamp_begin", "timestamp_end" + you can also use one of the following entries. The timestamp-attribute will be dynamically set to:

        +
          + current_year_begin : matches "<current year>-01-01 00:00:00"
          + current_year_end : matches "<current year>-12-31 23:59:59"
          + previous_year_begin : matches "<previous year>-01-01 00:00:00"
          + previous_year_end : matches "<previous year>-12-31 23:59:59"
          + current_month_begin : matches "<current month first day> 00:00:00"
          + current_month_end : matches "<current month last day> 23:59:59"
          + previous_month_begin : matches "<previous month first day> 00:00:00"
          + previous_month_end : matches "<previous month last day> 23:59:59"
          + current_week_begin : matches "<first day of current week> 00:00:00"
          + current_week_end : matches "<last day of current week> 23:59:59"
          + previous_week_begin : matches "<first day of previous week> 00:00:00"
          + previous_week_end : matches "<last day of previous week> 23:59:59"
          + current_day_begin : matches "<current day> 00:00:00"
          + current_day_end : matches "<current day> 23:59:59"
          + previous_day_begin : matches "<previous day> 00:00:00"
          + previous_day_end : matches "<previous day> 23:59:59"
          + current_hour_begin : matches "<current hour>:00:00"
          + current_hour_end : matches "<current hour>:59:59"
          + previous_hour_begin : matches "<previous hour>:00:00"
          + previous_hour_end : matches "<previous hour>:59:59"
          +
        +
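+ As an illustration, the resolution of e.g. "previous_month_begin" could look like this in Perl (sketch, not the module's implementation):
+
+        # resolve "previous_month_begin" to "YYYY-MM-01 00:00:00"
+        my ($mon, $year) = (localtime())[4,5];             # mon: 0..11, year: offset to 1900
+        ($mon, $year) = $mon == 0 ? (11, $year - 1) : ($mon - 1, $year);
+        my $ts = sprintf("%04d-%02d-01 00:00:00", $year + 1900, $mon + 1);
+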

        + + +
      • timestamp_end - end of data selection. If not set, the current date/time combination will be used.

      • + + The format of timestamp is as used with DbLog "YYYY-MM-DD HH:MM:SS". For the attributes "timestamp_begin", "timestamp_end" + you can also use one of the following entries. The timestamp-attribute will be dynamically set to:

        +
          + current_year_begin : matches "<current year>-01-01 00:00:00"
          + current_year_end : matches "<current year>-12-31 23:59:59"
          + previous_year_begin : matches "<previous year>-01-01 00:00:00"
          + previous_year_end : matches "<previous year>-12-31 23:59:59"
          + current_month_begin : matches "<current month first day> 00:00:00"
          + current_month_end : matches "<current month last day> 23:59:59"
          + previous_month_begin : matches "<previous month first day> 00:00:00"
          + previous_month_end : matches "<previous month last day> 23:59:59"
          + current_week_begin : matches "<first day of current week> 00:00:00"
          + current_week_end : matches "<last day of current week> 23:59:59"
          + previous_week_begin : matches "<first day of previous week> 00:00:00"
          + previous_week_end : matches "<last day of previous week> 23:59:59"
          + current_day_begin : matches "<current day> 00:00:00"
          + current_day_end : matches "<current day> 23:59:59"
          + previous_day_begin : matches "<previous day> 00:00:00"
          + previous_day_end : matches "<previous day> 23:59:59"
          + current_hour_begin : matches "<current hour>:00:00"
          + current_hour_end : matches "<current hour>:59:59"
          + previous_hour_begin : matches "<previous hour>:00:00"
          + previous_hour_end : matches "<previous hour>:59:59"

        + + Make sure that "timestamp_begin" < "timestamp_end" is fulfilled.

        + +
          + Example:

          + attr <name> timestamp_begin current_year_begin
          + attr <name> timestamp_end current_year_end

          + + # Analyzes the database between the time limits of the current year.
          +
        +

        + + Note
        + If the attribute "timeDiffToNow" is set, the attributes "timestamp_begin" and "timestamp_end" will be deleted if they were set before. Setting "timestamp_begin" or "timestamp_end" likewise causes the deletion of the attribute "timeDiffToNow" if it was set before.

        + + +
      • timeDiffToNow - the begin time of data selection will be set to the timestamp "<current time> - + <timeDiffToNow>" dynamically (e.g. if set to 86400, the last 24 hours are considered by data + selection). The time period will be calculated dynamically at execution time.

      • + +
          + Examples for input format:
          + attr <name> timeDiffToNow 86400
          + # the start time is set to "current time - 86400 seconds"
          + attr <name> timeDiffToNow d:2 h:3 m:2 s:10
          + # the start time is set to "current time - 2 days 3 hours 2 minutes 10 seconds"
          + attr <name> timeDiffToNow m:600
          + # the start time is set to "current time - 600 minutes" gesetzt
          + attr <name> timeDiffToNow h:2.5
          + # the start time is set to "current time - 2,5 hours"
          + attr <name> timeDiffToNow y:1 h:2.5
          + # the start time is set to "current time - 1 year and 2,5 hours"
          + attr <name> timeDiffToNow y:1.5
          + # the start time is set to "current time - 1.5 years"
          +
        +
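+ The input format can be pictured with a short Perl sketch (illustration; the factor of 365 days per year is an assumption, not necessarily the module's exact conversion):
+
+        # convert "y:1 d:2 h:3 m:2 s:10" or plain seconds into seconds
+        my %f = (y => 365*86400, d => 86400, h => 3600, m => 60, s => 1);
+        sub diff2sec {
+          my ($spec) = @_;
+          return $spec if $spec =~ /^\d+$/;                # plain seconds
+          my $sec = 0;
+          $sec += $f{$1} * $2 while $spec =~ /([ydhms]):([\d.]+)/g;
+          return $sec;
+        }
+        # diff2sec("d:2 h:3 m:2 s:10") returns 183730
+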
        + If both attributes "timeDiffToNow" and "timeOlderThan" are set, the selection period will be calculated between these timestamps dynamically.

        + + +
      • timeOlderThan - the end time of data selection will be set to the timestamp "<current time> - <timeOlderThan>" dynamically. Datasets up to the timestamp "<current time> - <timeOlderThan>" will always be considered (e.g. if set to 86400, all datasets older than one day are considered). The time period will be calculated dynamically at execution time.

      • + +
          + Examples for input format:
          + attr <name> timeOlderThan 86400
          + # the selection end time is set to "current time - 86400 seconds"
          + attr <name> timeOlderThan d:2 h:3 m:2 s:10
          + # the selection end time is set to "current time - 2 days 3 hours 2 minutes 10 seconds"
          + attr <name> timeOlderThan m:600
          + # the selection end time is set to "current time - 600 minutes" gesetzt
          + attr <name> timeOlderThan h:2.5
          + # the selection end time is set to "current time - 2,5 hours"
          + attr <name> timeOlderThan y:1 h:2.5
          + # the selection end time is set to "current time - 1 year and 2,5 hours"
          + attr <name> timeOlderThan y:1.5
          + # the selection end time is set to "current time - 1.5 years"
          +
        +
        + If both attributes "timeDiffToNow" and "timeOlderThan" are set, the selection period will be calculated between these timestamps dynamically.

        + + +
      • timeout - sets the timeout value for Blocking-Call routines running in the background, in seconds (default: 86400)

      • + + +
      • userExitFn - provides an interface to execute user-specific program code.
        + To activate the interface, first implement the subroutine which will be called by the interface in your 99_myUtils.pm as shown in the example:
        + +
        +        sub UserFunction {
        +          my ($name,$reading,$value) = @_;
        +          my $hash = $defs{$name};
        +          ...
        +          # e.g. output of the transferred data
        +          Log3 $name, 1, "UserExitFn $name called - transferred parameters are Reading: $reading, Value: $value";
        +          ...
        +          return;
        +        }
        + The interface is activated by setting the subroutine name in the attribute. Optionally you may set a Reading:Value combination (regex) as argument. If no regex is specified, all value combinations are evaluated as "true" (corresponding to .*:.*).

        + +
          + Example:
          + attr <name> userExitFn UserFunction .*:.*
          + # "UserFunction" is the name of the subroutine in 99_myUtils.pm.
        +
        + The interface generally works without and independently of events. If the attribute is set, the regex is evaluated after every reading creation. If the evaluation is "true", the specified subroutine is called. For further processing, the following parameters are passed to the function:

        + +
          +
        • $name - the name of the DbRep-Device
        • +
        • $reading - the name of the created reading
        • +
        • $value - the value of the reading
        • + +
        +
      • +
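+ Conceptually, the evaluation step can be sketched like this (illustration, not the module's code; $regex and $fn stand for the regex and subroutine name given in the attribute):
+
+        # match the regex against the "Reading:Value" combination of the created reading
+        if ("$reading:$value" =~ m/$regex/) {
+            no strict "refs";
+            &{$fn}($name, $reading, $value);               # e.g. calls UserFunction in 99_myUtils.pm
+        }
+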

        + + +
      • valueFilter - Regular expression to filter datasets within particular functions. The regex is applied to the whole selected dataset (including Device, Reading and so on). Please compare with the explanations of the relevant set commands.

      • + +
      +
+ + +Readings + +
+
    + Depending on the selected operation, the results will be shown as readings. At the beginning of a new operation, all old readings are deleted to avoid unsuitable or invalid readings remaining.

    + + In addition the following readings will be created:

    + +
        +
      • state - contains the current state of the evaluation. If warnings have occurred (state = Warning), compare the readings "diff_overrun_limit_<diffLimit>" and "less_data_in_period"

      • + +
      • errortext - description of the reason for an error state

      • + +
      • background_processing_time - the processing time spent for operations in background/forked operation

      • + +
      • sql_processing_time - the processing time spent for all SQL statements used for an operation

      • + +
      • diff_overrun_limit_<diffLimit> - contains a list of pairs of datasets whose mutual difference exceeds the threshold (<diffLimit>) determined by attribute "diffAccept" (default: 20).

      • + +
      • less_data_in_period - contains a list of time periods within which only one dataset was found. The difference calculation considers the last value of the aggregation period before the current one. Valid for function "diffValue".

      • + +
      • SqlResult - result of the last executed sqlCmd-command. The formatting can be specified + by attribute "sqlResultFormat"

      • + +
      • sqlCmd - contains the last executed sqlCmd-command

      • + +
    +

    + +
+ + +DbRep Agent - automatic change of device names in databases and DbRep-definitions after FHEM "rename" command + +
+
    + The attribute "role" configures the role of the DbRep device. The standard role is "Client". If the role is changed to "Agent", the DbRep device reacts automatically to the renaming of devices in your FHEM installation. The DbRep device is then called a DbRep-Agent.

    + The DbRep-Agent activates the following features when a FHEM device has been renamed:

    + +
        +
      • in the database connected to the DbRep-Agent (internal DATABASE), datasets containing the old device name will be searched and renamed to the new device name in all affected datasets.

      • + +
      • in the DbLog device assigned to the DbRep-Agent, the definition will be changed to substitute the old device name with the new one. Thereby the logging of the renamed device continues in the database.

      • + +
      • in other existing DbRep definitions of type "Client", a possibly set attribute "device = old device name" will be changed to "device = new device name". Because of that, reporting definitions are kept consistent automatically if devices are renamed in FHEM.

      • + +
    + The following restrictions apply if a DbRep device was changed to an Agent by setting attribute "role" to "Agent". These conditions will be activated and checked:

    + +
        +
      • within a FHEM installation, only one DbRep-Agent can be configured per defined DbLog database. That means, if more than one DbLog database is present, you can define as many DbRep-Agents as there are DbLog devices.

      • + +
      • after changing to the DbRep-Agent role, only the set command "renameDevice" will be available, as well as a reduced set of module-specific attributes. If a DbRep device of previous type "Client" is changed to an Agent, attributes that are no longer permitted will be deleted if set.

      • + +
    + All activities like database changes and changes of other DbRep definitions are logged in the FHEM logfile with verbose=3. To ensure that the renameDevice function doesn't run into a timeout, set the timeout attribute to an appropriate value, especially if there are databases with huge datasets to evaluate. Like all other database operations of this module, the autorename operation is executed non-blocking.

    + +
      + Example of definition of a DbRep-device as an Agent:

      + + define Rep.Agent DbRep LogDB
      + attr Rep.Agent devStateIcon connected:10px-kreis-gelb .*disconnect:10px-kreis-rot .*done:10px-kreis-gruen
      + attr Rep.Agent icon security
      + attr Rep.Agent role Agent
      + attr Rep.Agent room DbLog
      + attr Rep.Agent showproctime 1
      + attr Rep.Agent stateFormat { ReadingsVal("$name","state", undef) eq "running" ? "renaming" : ReadingsVal("$name","state", undef). " »; ProcTime: ".ReadingsVal("$name","sql_processing_time", undef)." sec"}
      + attr Rep.Agent timeout 86400
      +
      +
      +
    + + Note:
    + Even though the function itself is designed non-blocking, make sure the assigned DbLog device is operating in asynchronous mode to avoid blocking of FHEMWEB.

    + +
+ +=end html +=begin html_DE + + +

DbRep

+
    +
    + Zweck des Moduls ist es, den Inhalt von DbLog-Datenbanken nach bestimmten Kriterien zu durchsuchen, zu managen, das Ergebnis hinsichtlich verschiedener + Aggregationen auszuwerten und als Readings darzustellen. Die Abgrenzung der zu berücksichtigenden Datenbankinhalte erfolgt durch die Angabe von Device, Reading und + die Zeitgrenzen für Auswertungsbeginn bzw. Auswertungsende.

    + + Fast alle Datenbankoperationen werden nichtblockierend ausgeführt. Auf Ausnahmen wird hingewiesen. + Die Ausführungszeit der (SQL)-Hintergrundoperationen kann optional ebenfalls als Reading bereitgestellt + werden (siehe Attribute).
    + Alle vorhandenen Readings werden vor einer neuen Operation gelöscht. Durch das Attribut "readingPreventFromDel" kann eine Komma separierte Liste von Readings + angegeben werden die nicht gelöscht werden sollen.

    + + Aktuell werden folgende Operationen unterstützt:

    + +
        +
      • Selektion aller Datensätze innerhalb einstellbarer Zeitgrenzen
      • +
      • Darstellung der Datensätze einer Device/Reading-Kombination innerhalb einstellbarer Zeitgrenzen.
      • +
      • Selektion der Datensätze unter Verwendung von dynamisch berechneter Zeitgrenzen zum Ausführungszeitpunkt.
      • +
      • Dubletten-Hervorhebung bei Datensatzanzeige (fetchrows)
      • +
      • Berechnung der Anzahl von Datensätzen einer Device/Reading-Kombination unter Berücksichtigung von Zeitgrenzen + und verschiedenen Aggregationen.
      • +
      • Die Berechnung von Summen-, Differenz-, Maximum-, Minimum- und Durchschnittswerten numerischer Readings + in Zeitgrenzen und verschiedenen Aggregationen.
      • +
      • Speichern von Summen-, Differenz- , Maximum- , Minimum- und Durchschnittswertberechnungen in der Datenbank
      • +
      • Löschung von Datensätzen. Die Eingrenzung der Löschung kann durch Device und/oder Reading sowie fixer oder + dynamisch berechneter Zeitgrenzen zum Ausführungszeitpunkt erfolgen.
      • +
      • Export von Datensätzen in ein File im CSV-Format
      • +
      • Import von Datensätzen aus File im CSV-Format
      • +
      • Umbenennen von Device/Readings in Datenbanksätzen
      • +
      • Ändern von Reading-Werten (VALUES) in der Datenbank (changeValue)
      • +
      • automatisches Umbenennen von Device-Namen in Datenbanksätzen und DbRep-Definitionen nach FHEM "rename" + Befehl (siehe DbRep-Agent)
      • +
      • Ausführen von beliebigen Benutzer spezifischen SQL-Kommandos (non-blocking)
      • +
      • Ausführen von beliebigen Benutzer spezifischen SQL-Kommandos (blocking) zur Verwendung in eigenem Code (dbValue)
      • +
      • Backups der FHEM-Datenbank im laufenden Betrieb erstellen (MySQL, SQLite)
      • +
      • senden des Dumpfiles zu einem FTP-Server nach dem Backup incl. Versionsverwaltung
      • +
      • Restore von SQLite- und MySQL-Dumps
      • +
      • Optimierung der angeschlossenen Datenbank (optimizeTables, vacuum)
      • +
      • Ausgabe der existierenden Datenbankprozesse (MySQL)
      • +
      • leeren der current-Tabelle
      • +
      • Auffüllen der current-Tabelle mit einem (einstellbaren) Extrakt der history-Tabelle
      • +
      • Bereinigung sequentiell aufeinander folgender Datensätze mit unterschiedlichen Zeitstempel aber gleichen Werten (sequentielle Dublettenbereinigung)
      • +
      • Reparatur einer korrupten SQLite Datenbank ("database disk image is malformed")
      • +
      • Übertragung von Datensätzen aus der Quelldatenbank in eine andere (Standby) Datenbank (syncStandby)
      • +
      • Reduktion der Anzahl von Datensätzen in der Datenbank (reduceLog)
      • +
    +
    + + Zur Aktivierung der Funktion Autorename wird dem definierten DbRep-Device mit dem Attribut "role" die Rolle "Agent" zugewiesen. Die Standardrolle nach Definition + ist "Client". Mehr ist dazu im Abschnitt DbRep-Agent beschrieben.

    + DbRep stellt dem Nutzer einen UserExit zur Verfügung. Über diese Schnittstelle kann der Nutzer in Abhängigkeit von frei definierbaren Reading/Value-Kombinationen (Regex) eigenen Code zur Ausführung bringen. Diese Schnittstelle arbeitet unabhängig von einer Eventgenerierung. Weitere Informationen dazu sind unter Attribut "userExitFn" beschrieben.

    + + Sobald ein DbRep-Device definiert ist, wird die Funktion DbReadingsVal zur Verfügung gestellt. + Mit dieser Funktion läßt sich, ähnlich dem allgemeinen ReadingsVal, der Wert eines Readings aus der Datenbank abrufen. + Die Funktionsausführung erfolgt blockierend. + Die Befehlssyntax ist:

    + +
      + DbReadingsVal("<name>","<device:reading>","<timestamp>","<default>")

      + + Beispiele:
      + $ret = DbReadingsVal("Rep.LogDB1","MyWetter:temperature","2018-01-13 08:00:00","");
      + attr <name> userReadings oldtemp {DbReadingsVal("Rep.LogDB1","MyWetter:temperature","2018-04-13 08:00:00","")} +

      + + + + + + + +
      <name> : Name des abzufragenden DbRep-Device
      <device:reading> : Device:Reading dessen Wert geliefert werden soll
      <timestamp> : Zeitpunkt des zu liefernden Readingwertes (*) in der Form "YYYY-MM-DD hh:mm:ss"
      <default> : Defaultwert falls kein Readingwert ermittelt werden konnte
      +
    +
    + (*) Es wird der zeitlich zu <timestamp> passendste Readingwert zurückgeliefert, falls kein Wert exakt zu dem angegebenen Zeitpunkt geloggt wurde.

    + + FHEM-Forum:
    + Modul 93_DbRep - Reporting und Management von Datenbankinhalten (DbLog).

    + + FHEM-Wiki:
    + DbRep - Reporting und Management von DbLog-Datenbankinhalten.

    +
    +
+ +Voraussetzungen

+
    + Das Modul setzt den Einsatz einer oder mehrerer DbLog-Instanzen voraus. Es werden die Zugangsdaten dieser + Datenbankdefinition genutzt.
    + Es werden nur Inhalte der Tabelle "history" berücksichtigt wenn nichts anderes beschrieben ist.

    + + Überblick welche anderen Perl-Module DbRep verwendet:

    + + Net::FTP (nur wenn FTP-Transfer nach Datenbank-Dump genutzt wird)
    + Net::FTPSSL (nur wenn FTP-Transfer mit Verschlüsselung nach Datenbank-Dump genutzt wird)
    + POSIX
    + Time::HiRes
    + Time::Local
    + Scalar::Util
    + DBI
    + Color (FHEM-Modul)
    + IO::Compress::Gzip
    + IO::Uncompress::Gunzip
    + Blocking (FHEM-Modul)

    + Aus Performancegründen sollte zusätzlich folgender Index erstellt werden:
    + + CREATE INDEX Report_Idx ON `history` (TIMESTAMP, READING) USING BTREE; + +
+
+ + +Definition + +
+
    + + define <name> DbRep <Name der DbLog-Instanz> + + +

    + (<Name der DbLog-Instanz> - es wird der Name der auszuwertenden DbLog-Datenbankdefinition angegeben, nicht der Datenbankname selbst)
+ +

+ + +Set +
    + + Zur Zeit gibt es folgende Set-Kommandos. Über sie werden die Auswertungen angestoßen und definieren selbst die Auswertungsvariante. + Nach welchen Kriterien die Datenbankinhalte durchsucht werden und die Aggregation erfolgt, wird durch Attribute gesteuert. +

    + +
        +
      • averageValue [display | writeToDB] + - berechnet einen Durchschnittswert des Datenbankfelds "VALUE" in den + gegebenen Zeitgrenzen ( siehe Attribute). + Es muss das auszuwertende Reading über das Attribut "reading" + angegeben sein.
        + Mit dem Attribut "averageCalcForm" wird die Berechnungsvariante zur Mittelwertermittlung definiert. + Ist keine oder die Option "display" angegeben, werden die Ergebnisse nur angezeigt. Mit + der Option "writeToDB" werden die Berechnungsergebnisse mit einem neuen Readingnamen + in der Datenbank gespeichert.
        + Der neue Readingname wird aus einem Präfix und dem originalen Readingnamen gebildet, + wobei der originale Readingname durch das Attribut "readingNameMap" ersetzt werden kann. + Der Präfix setzt sich aus der Bildungsfunktion und der Aggregation zusammen.
        + Der Timestamp der neuen Readings in der Datenbank wird von der eingestellten Aggregationsperiode + abgeleitet, sofern kein eindeutiger Zeitpunkt des Ergebnisses bestimmt werden kann. + Das Feld "EVENT" wird mit "calculated" gefüllt.

        + +
          + Beispiel neuer Readingname gebildet aus dem Originalreading "totalpac":
          + avgam_day_totalpac
          + # <Bildungsfunktion>_<Aggregation>_<Originalreading>
          +
          +
        + +
      • cancelDump - bricht einen laufenden Datenbankdump ab.

      • + +
      • changeValue - ändert den gespeicherten Wert eines Readings. + Ist die Selektion auf bestimmte Device/Reading-Kombinationen durch die + Attribute "device" bzw. "reading" beschränkt, werden sie genauso + berücksichtigt wie gesetzte Zeitgrenzen (Attribute time.*).
        + Fehlen diese Beschränkungen, wird die gesamte Datenbank durchsucht und der angegebene Wert + geändert.

        + +
          + Syntax:
          + set <name> changeValue "<alter String>","<neuer String>"

          + + Die Strings werden in Doppelstrich eingeschlossen und durch Komma getrennt. + Dabei kann "String" sein:
          + +
          +<alter String> : * ein einfacher String mit/ohne Leerzeichen, z.B. "OL 12"
          +                 * ein String mit Verwendung von SQL-Wildcard, z.B. "%OL%"
          +                 
          +<neuer String> : * ein einfacher String mit/ohne Leerzeichen, z.B. "12 kWh"
          +                 * Perl Code eingeschlossen in "{}" inkl. Quotes, z.B. "{($VALUE,$UNIT) = split(" ",$VALUE)}". 
          +                   Dem Perl-Ausdruck werden die Variablen $VALUE und $UNIT übergeben. Sie können innerhalb
          +                   des Perl-Code geändert werden. Der zurückgebene Wert von $VALUE und $UNIT wird in dem Feld 
          +                   VALUE bzw. UNIT des Datensatzes gespeichert.                        
          +
          + + Beispiele:
          + set <name> changeValue "OL","12 OL"
          + # der alte Feldwert "OL" wird in "12 OL" geändert.

          + + set <name> changeValue "%OL%","12 OL"
          + # enthält das Feld VALUE den Teilstring "OL", wird es in "12 OL" geändert.

          + + set <name> changeValue "12 kWh","{($VALUE,$UNIT) = split(" ",$VALUE)}"
          + # der alte Feldwert "12 kWh" wird in VALUE=12 und UNIT=kWh gesplittet und in den Datenbankfeldern gespeichert

          + + set <name> changeValue "24%","{$VALUE = (split(" ",$VALUE))[0]}"
          + # beginnt der alte Feldwert mit "24", wird er gesplittet und VALUE=24 gespeichert (z.B. "24 kWh") +

          + + Zusammengefasst sind die zur Steuerung von changeValue relevanten Attribute:

          + +
            + + + + + + + +
            device : Selektion nur von Datensätzen die <device> enthalten
            reading : Selektion nur von Datensätzen die <reading> enthalten
            time.* : eine Reihe von Attributen zur Zeitabgrenzung
            executeBeforeProc : ausführen FHEM Kommando (oder perl-Routine) vor Start changeValue
            executeAfterProc : ausführen FHEM Kommando (oder perl-Routine) nach Ende changeValue
            +
          +
          +
          + + Hinweis:
          + Obwohl die Funktion selbst non-blocking ausgelegt ist, sollte das zugeordnete DbLog-Device + im asynchronen Modus betrieben werden um ein Blockieren von FHEMWEB zu vermeiden (Tabellen-Lock).

          +
          +
        + +
      • countEntries [history | current] - liefert die Anzahl der Tabelleneinträge (default: history) in den gegebenen Zeitgrenzen (siehe Attribute). Sind die Timestamps nicht gesetzt, werden alle Einträge gezählt. Beschränkungen durch die Attribute Device bzw. Reading gehen in die Selektion mit ein.

      • + +
      • delEntries - löscht alle oder die durch die Attribute device und/oder + reading definierten Datenbankeinträge. Die Eingrenzung über Timestamps erfolgt + folgendermaßen:

        + +
          + "timestamp_begin" gesetzt -> gelöscht werden DB-Einträge ab diesem Zeitpunkt bis zum aktuellen Datum/Zeit
          + "timestamp_end" gesetzt -> gelöscht werden DB-Einträge bis bis zu diesem Zeitpunkt
          + beide Timestamps gesetzt -> gelöscht werden DB-Einträge zwischen diesen Zeitpunkten
          + "timeOlderThan" gesetzt -> gelöscht werden DB-Einträge älter als aktuelle Zeit minus "timeOlderThan"
          + "timeDiffToNow" gesetzt -> gelöscht werden DB-Einträge ab aktueller Zeit minus "timeDiffToNow" bis jetzt
          + +
          + Aus Sicherheitsgründen muss das Attribut "allowDeletion" + gesetzt sein um die Löschfunktion freizuschalten.

          + + Die zur Steuerung von delEntries relevanten Attribute:

          + +
            + + + + + + + + +
            allowDeletion : Freischaltung der Löschfunktion
            device : Selektion nur von Datensätzen die <device> enthalten
            reading : Selektion nur von Datensätzen die <reading> enthalten
            time.* : eine Reihe von Attributen zur Zeitabgrenzung
            executeBeforeProc : ausführen FHEM Kommando (oder perl-Routine) vor Start delEntries
            executeAfterProc : ausführen FHEM Kommando (oder perl-Routine) nach Ende delEntries
            +
          +
          +
          + + +
          +
        + +
      • delSeqDoublets [adviceRemain | adviceDelete | delete] - zeigt bzw. löscht aufeinander folgende identische Datensätze. + Dazu wird Device,Reading und Value ausgewertet. Nicht gelöscht werden der erste und der letzte Datensatz + einer Aggregationsperiode (z.B. hour, day, week usw.) sowie die Datensätze vor oder nach einem Wertewechsel + (Datenbankfeld VALUE).
        + Die Attribute zur Aggregation,Zeit-,Device- und Reading-Abgrenzung werden dabei + berücksichtigt. Ist das Attribut "aggregation" nicht oder auf "no" gesetzt, wird als Standard die Aggregation + "day" verwendet. Für Datensätze mit numerischen Werten kann mit dem Attribut + "seqDoubletsVariance" eine Abweichung eingestellt werden, bis zu der aufeinander folgende numerische Werte als + identisch angesehen und gelöscht werden sollen. +

        + +
          + + + + + +
          adviceRemain : simuliert die nach der Operation in der DB verbleibenden Datensätze (es wird nichts gelöscht !)
          adviceDelete : simuliert die zu löschenden Datensätze (es wird nichts gelöscht !)
          delete : löscht die sequentiellen Dubletten (siehe Beispiel)
          +
        +
        + + Aus Sicherheitsgründen muss das Attribut "allowDeletion" für die "delete" Option + gesetzt sein.
        + Die Anzahl der anzuzeigenden Datensätze der Kommandos "delSeqDoublets adviceDelete", "delSeqDoublets adviceRemain" ist + zunächst begrenzt (default 1000) und kann durch das Attribut "limit" angepasst werden. + Die Einstellung von "limit" hat keinen Einfluss auf die "delSeqDoublets delete" Funktion, sondern beeinflusst NUR die + Anzeige der Daten.
        + Vor und nach der Ausführung von "delSeqDoublets" kann ein FHEM-Kommando bzw. Perl-Routine ausgeführt werden. + (siehe Attribute "executeBeforeProc", "executeAfterProc") +

        + +
          + Beispiel - die nach Verwendung der delete-Option in der DB verbleibenden Datensätze sind fett + gekennzeichnet:

          +
            + 2017-11-25_00-00-05__eg.az.fridge_Pwr__power 0
            + 2017-11-25_00-02-26__eg.az.fridge_Pwr__power 0
            + 2017-11-25_00-04-33__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-06-10__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-08-21__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-08-59__eg.az.fridge_Pwr__power 60.32
            + 2017-11-25_01-11-21__eg.az.fridge_Pwr__power 56.26
            + 2017-11-25_01-27-54__eg.az.fridge_Pwr__power 6.19
            + 2017-11-25_01-28-51__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-31-00__eg.az.fridge_Pwr__power 0
            + 2017-11-25_01-33-59__eg.az.fridge_Pwr__power 0
            + 2017-11-25_02-39-29__eg.az.fridge_Pwr__power 0
            + 2017-11-25_02-41-18__eg.az.fridge_Pwr__power 105.28
            + 2017-11-25_02-41-26__eg.az.fridge_Pwr__power 61.52
            + 2017-11-25_03-00-06__eg.az.fridge_Pwr__power 47.46
            + 2017-11-25_03-00-33__eg.az.fridge_Pwr__power 0
            + 2017-11-25_03-02-07__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-37-42__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-40-10__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-42-24__eg.az.fridge_Pwr__power 1
            + 2017-11-25_23-42-24__eg.az.fridge_Pwr__power 1
            + 2017-11-25_23-45-27__eg.az.fridge_Pwr__power 1
            + 2017-11-25_23-47-07__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-55-27__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-48-15__eg.az.fridge_Pwr__power 0
            + 2017-11-25_23-50-21__eg.az.fridge_Pwr__power 59.1
            + 2017-11-25_23-55-14__eg.az.fridge_Pwr__power 52.31
            + 2017-11-25_23-58-09__eg.az.fridge_Pwr__power 51.73
            +
          +
        + +
      • +
        +
        + +
      • deviceRename - benennt den Namen eines Device innerhalb der angeschlossenen Datenbank (Internal + DATABASE) um. + Der Gerätename wird immer in der gesamten Datenbank umgesetzt. Eventuell gesetzte + Zeitgrenzen oder Beschränkungen durch die Attribute Device bzw. + Reading werden nicht berücksichtigt.

        + +
          + Beispiel:
          + set <name> deviceRename ST_5000,ST5100
          + # Die Anzahl der umbenannten Device-Datensätze wird im Reading "device_renamed" ausgegeben.
          + # Wird der umzubenennende Gerätename in der Datenbank nicht gefunden, wird eine WARNUNG im Reading "device_not_renamed" ausgegeben.
          + # Entsprechende Einträge erfolgen auch im Logfile mit verbose=3 +

          + + Hinweis:
          + Obwohl die Funktion selbst non-blocking ausgelegt ist, sollte das zugeordnete DbLog-Device + im asynchronen Modus betrieben werden um ein Blockieren von FHEMWEB zu vermeiden (Tabellen-Lock).

          +
          +
        + +
      • diffValue [display | writeToDB] + - berechnet den Differenzwert des Datenbankfelds "VALUE" in den Zeitgrenzen (Attribute) "timestamp_begin", "timestamp_end" bzw "timeDiffToNow / timeOlderThan". + Es muss das auszuwertende Reading im Attribut "reading" angegeben sein. + Diese Funktion ist z.B. zur Auswertung von Eventloggings sinnvoll, deren Werte sich fortlaufend erhöhen und keine Wertdifferenzen wegschreiben.
        + Es wird immer die Differenz aus dem Value-Wert des ersten verfügbaren Datensatzes und dem Value-Wert des letzten verfügbaren Datensatzes innerhalb der angegebenen Zeitgrenzen/Aggregation gebildet, wobei ein Übertragswert der Vorperiode (Aggregation) zur darauf folgenden Aggregationsperiode berücksichtigt wird, sofern diese einen Value-Wert enthält.
        + Dabei wird ein Zählerüberlauf (Neubeginn bei 0) mit berücksichtigt (vergleiche Attribut "diffAccept").
        + Wird in einer auszuwertenden Zeit- bzw. Aggregationsperiode nur ein Datensatz gefunden, kann die Differenz in Verbindung mit dem Differenzübertrag der Vorperiode berechnet werden. In diesem Fall kann es zu einer logischen Ungenauigkeit in der Zuordnung der Differenz zu der Aggregationsperiode kommen. Deswegen werden eine Warnung im "state" und das Reading "less_data_in_period" mit einer Liste der betroffenen Perioden erzeugt.

        + +
          + Hinweis:
          + Im Auswertungs- bzw. Aggregationszeitraum (Tag, Woche, Monat, etc.) sollten dem Modul pro Periode mindestens ein Datensatz + zu Beginn und ein Datensatz gegen Ende des Aggregationszeitraumes zur Verfügung stehen um eine möglichst genaue Auswertung + der Differenzwerte vornehmen zu können. +
          +
          +
        + Ist keine oder die Option "display" angegeben, werden die Ergebnisse nur angezeigt. Mit + der Option "writeToDB" werden die Berechnungsergebnisse mit einem neuen Readingnamen + in der Datenbank gespeichert.
        + Der neue Readingname wird aus einem Präfix und dem originalen Readingnamen gebildet, + wobei der originale Readingname durch das Attribut "readingNameMap" ersetzt werden kann. + Der Präfix setzt sich aus der Bildungsfunktion und der Aggregation zusammen.
        + Der Timestamp der neuen Readings in der Datenbank wird von der eingestellten Aggregationsperiode + abgeleitet, sofern kein eindeutiger Zeitpunkt des Ergebnisses bestimmt werden kann. + Das Feld "EVENT" wird mit "calculated" gefüllt.

        + +
          + Beispiel neuer Readingname gebildet aus dem Originalreading "totalpac":
          + diff_day_totalpac
          + # <Bildungsfunktion>_<Aggregation>_<Originalreading>
          +
          +
        + +
      • dumpMySQL [clientSide | serverSide] + - erstellt einen Dump der angeschlossenen MySQL-Datenbank.
        + Abhängig von der ausgewählten Option wird der Dump auf der Client- bzw. Serverseite erstellt.
        + Die Varianten unterscheiden sich hinsichtlich des ausführenden Systems, des Erstellungsortes, der + Attributverwendung, des erzielten Ergebnisses und der benötigten Hardwareressourcen.
        + Die Option "clientSide" benötigt z.B. eine leistungsfähigere Hardware des FHEM-Servers, sichert aber alle + Tabellen inklusive eventuell angelegter Views.
        + Mit dem Attribut "dumpCompress" kann eine Komprimierung der erstellten Dumpfiles eingeschaltet werden. +

        + +
          + Option clientSide
          + Der Dump wird durch den Client (FHEM-Rechner) erstellt und per default im log-Verzeichnis des Clients + gespeichert. + Das Zielverzeichnis kann mit dem Attribut "dumpDirLocal" verändert werden und muß auf + dem Client durch FHEM beschreibbar sein.
          + Vor dem Dump kann eine Tabellenoptimierung (Attribut "optimizeTablesBeforeDump") oder ein FHEM-Kommando + (Attribut "executeBeforeProc") optional zugeschaltet werden. + Nach dem Dump kann ebenfalls ein FHEM-Kommando (siehe Attribut "executeAfterProc") ausgeführt werden.

          + + Achtung !
          + Um ein Blockieren von FHEM zu vermeiden, muß DbLog im asynchronen Modus betrieben werden wenn die + Tabellenoptimierung verwendet wird !


          + + Über die Attribute "dumpMemlimit" und "dumpSpeed" kann das Laufzeitverhalten der + Funktion beeinflusst werden um eine Optimierung bezüglich Performance und Ressourcenbedarf zu erreichen.

          + + Die für "dumpMySQL clientSide" relevanten Attribute sind:

          +
            + + + + + + + + + + + +
            dumpComment : User-Kommentar im Dumpfile
            dumpCompress : Komprimierung des Dumpfiles nach der Erstellung
            dumpDirLocal : das lokale Zielverzeichnis für die Erstellung des Dump
            dumpMemlimit : Begrenzung der Speicherverwendung
            dumpSpeed : Begrenzung die CPU-Belastung
            dumpFilesKeep : Anzahl der aufzubewahrenden Dumpfiles
            executeBeforeProc : ausführen FHEM Kommando (oder perl-Routine) vor dem Dump
            executeAfterProc : ausführen FHEM Kommando (oder perl-Routine) nach dem Dump
            optimizeTablesBeforeDump : Tabellenoptimierung vor dem Dump ausführen
            +
          +
          + + Nach einem erfolgreichen Dump werden alte Dumpfiles gelöscht und nur die Anzahl Files, definiert durch + das Attribut "dumpFilesKeep" (default: 3), verbleibt im Zielverzeichnis "dumpDirLocal". Falls "dumpFilesKeep = 0" + gesetzt ist, werden alle Dumpfiles (auch das aktuell erstellte File), gelöscht. + Diese Einstellung kann sinnvoll sein, wenn FTP aktiviert ist + und die erzeugten Dumps nur im FTP-Zielverzeichnis erhalten bleiben sollen.

          + + Die Namenskonvention der Dumpfiles ist: <dbname>_<date>_<time>.sql[.gzip]

          + Um die Datenbank aus dem Dumpfile wiederherzustellen, kann das Kommando:

          + +
            + set <name> restoreMySQL <filename>

            +
          + + verwendet werden.

          + + Das erzeugte Dumpfile (unkomprimiert) kann ebenfalls mit:

          + +
            + mysql -u <user> -p <dbname> < <filename>.sql

            +
          + + auf dem MySQL-Server ausgeführt werden um die Datenbank aus dem Dump wiederherzustellen.

          +
          + + Option serverSide
          + Der Dump wird durch den MySQL-Server erstellt und per default im Home-Verzeichnis des MySQL-Servers + gespeichert.
          + Es wird die gesamte history-Tabelle (nicht current-Tabelle) im CSV-Format ohne + Einschränkungen exportiert.
          + Vor dem Dump kann eine Tabellenoptimierung (Attribut "optimizeTablesBeforeDump") optional zugeschaltet werden.

          + + Achtung !
          + Um ein Blockieren von FHEM zu vermeiden, muß DbLog im asynchronen Modus betrieben werden wenn die + Tabellenoptimierung verwendet wird !


          + + Vor und nach dem Dump kann ein FHEM-Kommando (siehe Attribute "executeBeforeProc", "executeAfterProc") ausgeführt + werden.

          + + Die für "dumpMySQL serverSide" relevanten Attribute sind:

          +
            + + + + + + + + + +
            dumpDirRemote : das Erstellungsverzeichnis des Dumpfile auf dem entfernten Server
            dumpCompress : Komprimierung des Dumpfiles nach der Erstellung
            dumpDirLocal : Directory des lokal gemounteten dumpDirRemote-Verzeichnisses
            dumpFilesKeep : Anzahl der aufzubewahrenden Dumpfiles
            executeBeforeProc : ausführen FHEM Kommando (oder perl-Routine) vor dem Dump
            executeAfterProc : ausführen FHEM Kommando (oder perl-Routine) nach dem Dump
            optimizeTablesBeforeDump : Tabellenoptimierung vor dem Dump ausführen
            +
          +
          + Das Zielverzeichnis kann mit dem Attribut "dumpDirRemote" verändert werden. Es muß sich auf dem MySQL-Host befinden und durch den MySQL-Serverprozess beschreibbar sein.
          + Der verwendete Datenbankuser benötigt das "FILE"-Privileg.

          + + Hinweis:
          + Sollen die interne Versionsverwaltung und die Dumpfilekompression des Moduls genutzt sowie die Größe des erzeugten Dumpfiles ausgegeben werden, ist das Verzeichnis "dumpDirRemote" des MySQL-Servers auf dem Client zu mounten und im Attribut "dumpDirLocal" dem DbRep-Device bekannt zu machen.
          + Gleiches gilt wenn der FTP-Transfer nach dem Dump genutzt werden soll (Attribut "ftpUse" bzw. "ftpUseSSL"). +

          + +
            + Beispiel:
            + attr <name> dumpDirRemote /volume1/ApplicationBackup/dumps_FHEM/
            + attr <name> dumpDirLocal /sds1/backup/dumps_FHEM/
            + attr <name> dumpFilesKeep 2

            + + # Der Dump wird remote auf dem MySQL-Server im Verzeichnis '/volume1/ApplicationBackup/dumps_FHEM/' + erstellt.
            + # Die interne Versionsverwaltung sucht im lokal gemounteten Verzeichnis '/sds1/backup/dumps_FHEM/' + vorhandene Dumpfiles und löscht diese bis auf die zwei letzten Versionen.
            +
            +
          + + Wird die interne Versionsverwaltung genutzt, werden nach einem erfolgreichen Dump alte Dumpfiles gelöscht + und nur die Anzahl "dumpFilesKeep" (default: 3) verbleibt im Zielverzeichnis "dumpDirRemote". + FHEM benötigt in diesem Fall Schreibrechte auf dem Verzeichnis "dumpDirLocal".

          + + Die Namenskonvention der Dumpfiles ist: <dbname>_<date>_<time>.csv[.gzip]

          + + Ein Restore der Datenbank aus diesem Backup kann durch den Befehl:

          +
            + set <name> restoreMySQL <filename>.csv[.gzip]

            +
          + + gestartet werden.

          + + + FTP Transfer nach Dump
          + Wenn diese Möglichkeit genutzt werden soll, ist das Attribut "ftpUse" oder + "ftpUseSSL" zu setzen. Letzteres gilt wenn eine verschlüsselte Übertragung genutzt werden soll.
          + Das Modul übernimmt ebenfalls die Versionierung der Dumpfiles im FTP-Zielverzeichnis mit Hilfe des Attributes + "ftpDumpFilesKeep". + Für die FTP-Übertragung relevante Attribute sind:

          + +
            + + + + + + + + + + + + + +
            ftpUse : FTP Transfer nach dem Dump wird eingeschaltet (ohne SSL Verschlüsselung)
            ftpUser : User zur Anmeldung am FTP-Server, default: anonymous
            ftpUseSSL : FTP Transfer mit SSL Verschlüsselung nach dem Dump wird eingeschaltet
            ftpDebug : Debugging des FTP Verkehrs zur Fehlersuche
            ftpDir : Verzeichnis auf dem FTP-Server in welches das File übertragen werden soll (default: "/")
            ftpDumpFilesKeep : Es wird die angegebene Anzahl Dumpfiles im <ftpDir> belassen (default: 3)
            ftpPassive : setzen wenn passives FTP verwendet werden soll
            ftpPort : FTP-Port, default: 21
            ftpPwd : Passwort des FTP-Users, default nicht gesetzt
            ftpServer : Name oder IP-Adresse des FTP-Servers. notwendig !
            ftpTimeout : Timeout für die FTP-Verbindung in Sekunden (default: 30).
            +
          +
          +
          + +
        +

      • + +
      • dumpSQLite - erstellt einen Dump der angeschlossenen SQLite-Datenbank.
        + Diese Funktion nutzt die SQLite Online Backup API und ermöglicht es, konsistente Backups der SQLite-DB im laufenden Betrieb zu erstellen. Der Dump wird per default im log-Verzeichnis des FHEM-Rechners gespeichert. Das Zielverzeichnis kann mit dem Attribut "dumpDirLocal" verändert werden und muß durch FHEM beschreibbar sein. Vor dem Dump kann optional eine Tabellenoptimierung (Attribut "optimizeTablesBeforeDump") zugeschaltet werden.

        + + Achtung !
        + Um ein Blockieren von FHEM zu vermeiden, muß DbLog im asynchronen Modus betrieben werden wenn die + Tabellenoptimierung verwendet wird !


        + + Vor und nach dem Dump kann ein FHEM-Kommando (siehe Attribute "executeBeforeProc", "executeAfterProc") + ausgeführt werden.

        + + Die für diese Funktion relevanten Attribute sind:

        +
          + + + + + + + + +
          dumpCompress : Komprimierung des Dumpfiles nach der Erstellung
          dumpDirLocal : Directory des lokal gemounteten dumpDirRemote-Verzeichnisses
          dumpFilesKeep : Anzahl der aufzubewahrenden Dumpfiles
          executeBeforeProc : ausführen FHEM Kommando (oder perl-Routine) vor dem Dump
          executeAfterProc : ausführen FHEM Kommando (oder perl-Routine) nach dem Dump
          optimizeTablesBeforeDump : Tabellenoptimierung vor dem Dump ausführen
          +
        +
        + + Nach einem erfolgreichen Dump werden alte Dumpfiles gelöscht und nur die Anzahl Files, definiert durch das + Attribut "dumpFilesKeep" (default: 3), verbleibt im Zielverzeichnis "dumpDirLocal". Falls "dumpFilesKeep = 0" gesetzt, werden + alle Dumpfiles (auch das aktuell erstellte File), gelöscht. Diese Einstellung kann sinnvoll sein, wenn FTP aktiviert ist + und die erzeugten Dumps nur im FTP-Zielverzeichnis erhalten bleiben sollen.

        + + Die Namenskonvention der Dumpfiles ist: <dbname>_<date>_<time>.sqlitebkp[.gzip]

        + + Die Datenbank kann mit "set <name> restoreSQLite <Filename>" wiederhergestellt + werden.
        + Das erstellte Dumpfile kann auf einen FTP-Server übertragen werden. Siehe dazu die Erläuterungen + unter "dumpMySQL".

        +

      • + +
      • eraseReadings - Löscht alle angelegten Readings im Device, außer dem Reading "state" und Readings, die in der mit Attribut "readingPreventFromDel" definierten Ausnahmeliste enthalten sind.

      • + +
      • exportToFile [<File>] + - exportiert DB-Einträge im CSV-Format in den gegebenen Zeitgrenzen.
        + Einschränkungen durch die Attribute "device" bzw. "reading" gehen in die Selektion mit ein. + Der Dateiname wird durch das Attribut "expimpfile" bestimmt.
        + Alternativ kann die Datei (/Pfad/Datei) als Kommando-Option angegeben werden und übersteuert ein + eventuell gesetztes Attribut "expimpfile". Der Dateiname kann Wildcards enthalten (siehe Attribut "expimpfile"). +
        + Durch das Attribut "aggregation" wird der Export der Datensätze in Zeitscheiben der angegebenen Aggregation + vorgenommen. Ist z.B. "aggregation = month" gesetzt, werden die Daten in monatlichen Paketen selektiert und in + das Exportfile geschrieben. Dadurch wird die Hauptspeicherverwendung optimiert wenn sehr große Datenmengen + exportiert werden sollen und vermeidet den "died prematurely" Abbruchfehler.

        + + Die für diese Funktion relevanten Attribute sind:

        +
          + + + + + + + + + +
          aggregation : Festlegung der Selektionspaketierung
          device : Einschränkung des Exports auf ein bestimmtes Device
          reading : Einschränkung des Exports auf ein bestimmtes Reading
          executeBeforeProc : FHEM Kommando (oder perl-Routine) vor dem Export ausführen
          executeAfterProc : FHEM Kommando (oder perl-Routine) nach dem Export ausführen
          expimpfile : der Name des Exportfiles
          time.* : eine Reihe von Attributen zur Zeitabgrenzung
          +
        + +
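+ Ein einfaches Anwendungsbeispiel mit %-Wildcards im Dateinamen (Pfad und Devicename sind hypothetisch gewählt):
+
+        set Rep.LogDB1 exportToFile /opt/fhem/log/export_%Y-%m-%d.csv
+        # exportiert die selektierten Datensätze in eine Datei mit Tagesdatum im Namen
+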

      • + +
      • fetchrows [history|current] + - liefert alle Tabelleneinträge (default: history) + in den gegebenen Zeitgrenzen bzw. Selektionsbedingungen durch die Attribute + "device" und "reading". + Eine evtl. gesetzte Aggregation wird dabei nicht berücksichtigt.
        + Die Leserichtung in der Datenbank kann durch das Attribut + "fetchRoute" bestimmt werden.

        + + Jedes Ergebnisreading setzt sich aus dem Timestring des Datensatzes, einem Index, dem Device + und dem Reading zusammen. + Die Funktion fetchrows ist in der Lage mehrfach vorkommende Datensätze (Dubletten) zu erkennen. + Solche Dubletten sind mit einem Index > 1 gekennzeichnet.
        + Dubletten können mit dem Attribut "fetchMarkDuplicates" farblich hervorgehoben werden.

        + + Hinweis:
        + Hervorgehobene Readings werden nach einem Restart bzw. nach rereadcfg nicht mehr angezeigt da + sie nicht im statefile gesichert werden (Verletzung erlaubter Readingnamen durch Formatierung). +

        + + Dieses Attribut ist mit einigen Farben vorbelegt, kann aber mit dem colorpicker-Widget + überschrieben werden:

        + +
          + + attr <name> widgetOverride fetchMarkDuplicates:colorpicker + +
        +
        + + Die Ergebnisreadings von fetchrows sind nach folgendem Schema aufgebaut:

        + +
          + Beispiel:
          + 2017-10-22_03-04-43__1__SMA_Energymeter__Bezug_WirkP_Kosten_Diff
          + # <Datum>_<Zeit>__<Index>__<Device>__<Reading> +
        +
        + + Zur besseren Übersicht sind die zur Steuerung von fetchrows relevanten Attribute hier noch einmal + dargestellt:

        + +
          + + + + + + + + + +
          fetchRoute : Leserichtung des Selekts innerhalb der Datenbank
          limit : begrenzt die Anzahl zu selektierenden bzw. anzuzeigenden Datensätze
          fetchMarkDuplicates : Hervorhebung von gefundenen Dubletten
          device : Selektion nur von Datensätzen die <device> enthalten
          reading : Selektion nur von Datensätzen die <reading> enthalten
          time.* : eine Reihe von Attributen zur Zeitabgrenzung
          valueFilter : filtert die anzuzeigenden Datensätze mit einem regulären Ausdruck
          +
        +
        +
        + + Hinweis:
        + Auch wenn das Modul bezüglich der Datenbankabfrage nichtblockierend arbeitet, kann eine zu große Ergebnismenge (Anzahl Zeilen bzw. Readings) die Browsersession bzw. FHEMWEB blockieren. Aus diesem Grund wird die Ergebnismenge mit dem Attribut "limit" begrenzt. Bei Bedarf kann dieses Attribut geändert werden, falls eine Anpassung der Selektionsbedingungen nicht möglich oder gewünscht ist.

        +

      • + +
      • insert - Manuelles Einfügen eines Datensatzes in die Tabelle "history". Obligatorisch sind Eingabewerte für Datum, Zeit und Value. + Die Werte für die DB-Felder Type bzw. Event werden mit "manual" gefüllt, sowie die Werte für Device, Reading aus den gesetzten Attributen genommen.

        + +
          + Eingabeformat: Datum,Zeit,Value,[Unit]
          + # Unit ist optional, Attribute "reading" und "device" müssen gesetzt sein
          + # Soll "Value=0" eingefügt werden, ist "Value = 0.0" zu verwenden.

          + + Beispiel: 2016-08-01,23:00:09,TestValue,TestUnit
          + # Es sind KEINE Leerzeichen im Feldwert erlaubt !
          +
          + + Hinweis:
          + Bei der Eingabe ist darauf zu achten, dass im beabsichtigten Aggregationszeitraum (Tag, Woche, Monat, etc.) MINDESTENS zwei Datensätze für die Funktion diffValue zur Verfügung stehen. Ansonsten kann keine Differenz berechnet werden und diffValue gibt in diesem Fall "0" in der betroffenen Periode aus!
          +
          + +
        + +
      • importFromFile [<File>] - imports datasets in CSV format from a file into the database.
        The file name is determined by the attribute "expimpfile".
        Alternatively the file (/path/file) can be given as a command option and overrides an
        attribute "expimpfile" that may be set. The file name may contain wildcards (see
        attribute "expimpfile").

          Dataset format:
          "TIMESTAMP","DEVICE","TYPE","EVENT","READING","VALUE","UNIT"

          # The fields "TIMESTAMP","DEVICE","TYPE","EVENT","READING" and "VALUE" must be set.
          The field "UNIT" is optional. The file content is imported as a transaction, i.e.
          either the content of the whole file or, in case of an error, no dataset of the file
          is imported. When importing a large file with many datasets, do NOT set verbose=5.
          A very large number of lines would be written to the logfile in that case, which
          could block or overload FHEM.

          Example:
          "2016-09-25 08:53:56","STP_5000","SMAUTILS","etotal: 11859.573","etotal","11859.573",""

          The attributes relevant for this function are:

            executeBeforeProc : execute a FHEM command (or Perl routine) before the import
            executeAfterProc  : execute a FHEM command (or Perl routine) after the import
            expimpfile        : the name of the import file
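        A call with an explicit file, overriding a set "expimpfile" attribute, might look like
        this; the DbRep device name and the path are only assumed examples:

          set Rep.LogDB1 importFromFile /opt/fhem/log/export_2018-10-17.csv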
      • maxValue [display | writeToDB] - calculates the maximum value of the database field
        "VALUE" within the time limits given by the attributes "timestamp_begin",
        "timestamp_end" or "timeDiffToNow / timeOlderThan".
        The reading to evaluate must be specified with the attribute "reading".
        The evaluation contains the timestamp of the determined maximum value within the
        aggregation or time limits. If the maximum value occurs more than once within the
        interval, the reading shows the timestamp of its last occurrence.

        If no option or the option "display" is given, the results are only displayed. With
        the option "writeToDB" the calculation results are stored in the database with a new
        reading name.
        The new reading name is built from a prefix and the original reading name, where the
        original reading name can be replaced by the attribute "readingNameMap". The prefix
        consists of the calculation function and the aggregation.
        The timestamp of the new readings in the database is derived from the configured
        aggregation period, unless an unambiguous point in time of the result can be
        determined. The field "EVENT" is filled with "calculated".

          Example of a new reading name built from the original reading "totalpac":
          max_day_totalpac
          # <calculation function>_<aggregation>_<original reading>
      • minValue [display | writeToDB] - calculates the minimum value of the database field
        "VALUE" within the time limits given by the attributes "timestamp_begin",
        "timestamp_end" or "timeDiffToNow / timeOlderThan".
        The reading to evaluate must be specified with the attribute "reading".
        The evaluation contains the timestamp of the determined minimum value within the
        aggregation or time limits. If the minimum value occurs more than once within the
        interval, the reading shows the timestamp of its first occurrence.

        If no option or the option "display" is given, the results are only displayed. With
        the option "writeToDB" the calculation results are stored in the database with a new
        reading name.
        The new reading name is built from a prefix and the original reading name, where the
        original reading name can be replaced by the attribute "readingNameMap". The prefix
        consists of the calculation function and the aggregation.
        The timestamp of the new readings in the database is derived from the configured
        aggregation period, unless an unambiguous point in time of the result can be
        determined. The field "EVENT" is filled with "calculated".

          Example of a new reading name built from the original reading "totalpac":
          min_day_totalpac
          # <calculation function>_<aggregation>_<original reading>
      • optimizeTables - optimizes the tables of the connected database (MySQL).
        A FHEM command can be executed before and after the optimization (see attributes
        "executeBeforeProc", "executeAfterProc").

          Note:
          Although the function itself is designed non-blocking, the assigned DbLog device
          must be operated in asynchronous mode to avoid blocking FHEMWEB.
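        As an illustration, a run that logs a message after completion could be set up like
        this sketch; the DbRep device name "Rep.LogDB1" is an assumed example and "adump" is
        the 99_myUtils function shown under the attribute "executeAfterProc" below:

          attr Rep.LogDB1 executeAfterProc {adump ("Rep.LogDB1")}
          set Rep.LogDB1 optimizeTables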
      • readingRename - renames a reading inside the connected database (see Internal
        DATABASE). The reading name is always changed in the entire database. Time limits or
        restrictions by the attributes device and reading that may be set are not taken into
        account.

          Example:
          set <name> readingRename <old reading name>,<new reading name>
          # The number of renamed device datasets is reported in the reading "reading_renamed".
          # If the reading name to rename is not found in the database, a WARNING is reported
            in the reading "reading_not_renamed".
          # Corresponding entries are also written to the logfile with verbose=3.

          Note:
          Although the function itself is designed non-blocking, the assigned DbLog device
          should be operated in asynchronous mode to avoid blocking FHEMWEB (table lock).
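        For instance, with purely illustrative names, renaming the reading "etoday" to
        "energy_today" on the DbRep device "Rep.LogDB1" would be:

          set Rep.LogDB1 readingRename etoday,energy_today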
      • reduceLog [average[=day]] [exclude=device1:reading1,device2:reading2,...] [include=device:reading]
        Reduces historical datasets within the time limits defined by the "time.*" attributes
        to one entry (the first one) per hour per device & reading.
        At least one of the "time.*" attributes must be set (see table below). The missing
        time limit is calculated by the module in that case.

        The attributes relevant for this function are:

          executeBeforeProc : execute a FHEM command (or Perl routine) before the reduction
          executeAfterProc  : execute a FHEM command (or Perl routine) after the reduction
          timeOlderThan     : database entries older than this attribute are reduced
          timestamp_end     : database entries older than this attribute are reduced
          timeDiffToNow     : database entries newer than this attribute are reduced
          timestamp_begin   : database entries newer than this attribute are reduced

        The reading "reduceLogState" contains the execution result of the last reduceLog
        command.

        With the optional specification of 'average', not only is the database cleaned up, but
        all numeric values of one hour are additionally reduced to a single average value.
        With the optional specification of 'average=day', all numeric values of one day are
        reduced to a single average value (implies 'average').

        Optionally "exclude=device1:reading1,device2:reading2,..." can be given as the last
        parameter to exclude device/reading combinations from reduceLog.
        Tip: if "exclude=.*:.*" is specified, nothing is deleted from the database. This can
        be used e.g. to check the configured time limits and the number of database entries
        to process in advance; see the dry-run sketch after this section.

        Optionally "include=device:reading" can be given as the last parameter to narrow the
        SELECT statement executed on the database, which reduces RAM usage and increases
        performance.

          Example:

          attr <name> timeOlderThan = d:200
          set <name> reduceLog
          # Datasets older than 200 days are reduced to the first entry per hour per
            device & reading.

          attr <name> timeDiffToNow = d:10
          attr <name> timeOlderThan = d:5
          set <name> reduceLog average include=Luftdaten_remote:%
          # Datasets older than 5 and newer than 10 days are cleaned up. Numeric values of one
            hour are reduced to an average value.

        Note:
        Although the function itself is designed non-blocking, the assigned DbLog device
        should be operated in asynchronous mode to avoid blocking FHEMWEB (table lock).
        Furthermore it is strongly recommended to create the standard index 'Search_Idx' on
        the 'history' table!
        Without an index, processing this command may take an extremely long time.

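        Following the tip above, a dry run that deletes nothing and merely lets you verify the
        configured time limits and the number of entries that would be processed could look
        like this; the DbRep device name is only an assumed example:

          attr Rep.LogDB1 timeOlderThan d:365
          set Rep.LogDB1 reduceLog exclude=.*:.*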
      • repairSQLite - repairs a corrupt SQLite database.
        A corruption is usually present when the error message "database disk image is
        malformed" appears in the state of the DbLog device.
        When this command is started, the connected DbLog device is first automatically
        disconnected from the database for 10 hours (36000 seconds) (disconnection time).
        After the repair has finished, the DbLog device immediately reconnects to the repaired
        database.
        A deviating disconnection time (in seconds) can be given to the command as an argument.
        The corrupt database is saved as <database>.corrupt in the same directory.

          Example:
          set <name> repairSQLite
          # the database is repaired, disconnection time is 10 hours
          set <name> repairSQLite 600
          # the database is repaired, disconnection time is 10 minutes

          Note:
          It is not guaranteed that the repair succeeds and that no data is lost. Depending on
          the severity of the corruption, data loss can occur or the repair can fail, even if
          no error is signalled during the process. A backup of the database should definitely
          be available!
      • restoreMySQL <File> - restores the database from a serverSide or clientSide dump.
        The function provides a drop-down list of the files available for the restore.

        Using a serverSide dump
        The content of the history table is restored from a serverSide dump. To do so, the
        directory "dumpDirRemote" of the MySQL server has to be mounted on the client and made
        known to the DbRep device with the attribute "dumpDirLocal".
        All files with the extension "csv[.gzip]" whose names begin with the name of the
        connected database (see Internal DATABASE) are listed.

        Using a clientSide dump
        All tables and any existing views are restored. The directory containing the dump
        files has to be made known to the DbRep device with the attribute "dumpDirLocal".
        All files with the extension "sql[.gzip]" whose names begin with the name of the
        connected database are listed.
        The speed of the restore depends on the server variable "max_allowed_packet". The
        speed can be adjusted by changing this variable in the file my.cnf. Make sure that
        sufficient resources (especially RAM) are available.

        The database user needs privileges for table management, e.g.:
        CREATE, ALTER, INDEX, DROP, SHOW VIEW, CREATE VIEW

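        A restore call then simply names one of the listed files; the DbRep device name and
        the file name (derived from a database assumed to be called "fhem") are only
        illustrative examples:

          set Rep.LogDB1 restoreMySQL fhem_2018_10_17_15_04.csv.gzip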
      • restoreSQLite <File>.sqlitebkp[.gzip] - restores the backup of a SQLite database.
        The function provides a drop-down list of the files available for the restore. The
        data currently contained in the target database is deleted or overwritten.
        All files with the extension "sqlitebkp[.gzip]" whose names begin with the name of the
        connected database are listed.
      • sqlCmd - executes an arbitrary user-specific command.
        If this command contains a delete operation, the attribute "allowDeletion" must be set
        for safety.
        When this command is executed, no restrictions by the attributes "device", "reading",
        "time.*" or "aggregation" are taken into account.
        If the attributes "timestamp_begin" or "timestamp_end" set in the module are to be
        considered in the statement, the placeholders "§timestamp_begin§" and
        "§timestamp_end§" can be used for that purpose.

        If a dataset is to be updated, "TIMESTAMP=TIMESTAMP" has to be added to the statement
        to prevent a change of the original timestamp.

          Example statements:

          • set <name> sqlCmd select DEVICE, count(*) from history where TIMESTAMP >= "2017-01-06 00:00:00" group by DEVICE having count(*) > 800
          • set <name> sqlCmd select DEVICE, count(*) from history where TIMESTAMP >= "2017-05-06 00:00:00" group by DEVICE
          • set <name> sqlCmd select DEVICE, count(*) from history where TIMESTAMP >= §timestamp_begin§ group by DEVICE
          • set <name> sqlCmd select * from history where DEVICE like "Te%t" order by `TIMESTAMP` desc
          • set <name> sqlCmd select * from history where `TIMESTAMP` > "2017-05-09 18:03:00" order by `TIMESTAMP` desc
          • set <name> sqlCmd select * from current order by `TIMESTAMP` desc
          • set <name> sqlCmd select sum(VALUE) as 'Einspeisung am 04.05.2017', count(*) as 'Anzahl' FROM history where `READING` = "Einspeisung_WirkP_Zaehler_Diff" and TIMESTAMP between '2017-05-04' AND '2017-05-05'
          • set <name> sqlCmd delete from current
          • set <name> sqlCmd delete from history where TIMESTAMP < "2016-05-06 00:00:00"
          • set <name> sqlCmd update history set TIMESTAMP=TIMESTAMP,VALUE='Val' WHERE VALUE='TestValue'
          • set <name> sqlCmd select * from history where DEVICE = "Test"
          • set <name> sqlCmd insert into history (TIMESTAMP, DEVICE, TYPE, EVENT, READING, VALUE, UNIT) VALUES ('2017-05-09 17:00:14','Test','manuell','manuell','Tes§e','TestValue','°C')

          The result of the statement is shown in the reading "SqlResult". The result
          formatting can be selected with the attribute "sqlResultFormat", and the field
          separator used with the attribute "sqlResultFieldSep".

          The module optionally provides a command history as soon as an SQL command has been
          executed successfully. To use this option, activate the attribute
          "sqlCmdHistoryLength" with the desired list length.

          For a better overview, the attributes relevant for controlling sqlCmd are listed
          here once more:

            allowDeletion       : enables delete operations
            sqlResultFormat     : determines the presentation of the command result
            sqlResultFieldSep   : selection of the field separator in the result
            sqlCmdHistoryLength : activation of the command history and its size

          Note:
          Even though the module works non-blocking with respect to the database query, a
          result set that is too large (number of rows or readings) can block the browser
          session or FHEMWEB. If in doubt, add a LIMIT clause to the statement as a
          precaution.

      • sqlCmdHistory - if activated with the attribute "sqlCmdHistoryLength", an sqlCmd
        command that has already been executed successfully can be repeated from a list.
        Executing the last entry of the list, "__purge_historylist__", deletes the list.
        If the statement contains ",", this character is displayed as "<c>" in the history
        list for technical reasons.

      • sqlSpecial - the function offers a drop-down list with a selection of prepared
        evaluations.
        The result of the statement is shown in the reading "SqlResult". The result formatting
        can be selected with the attribute "sqlResultFormat", and the field separator used
        with the attribute "sqlResultFieldSep".

        The attributes relevant for this function are:

          sqlResultFormat   : options for the result formatting
          sqlResultFieldSep : selection of the separator between result fields

        The following predefined evaluations are selectable:

          50mostFreqLogsLast2days : determines the 50 most frequently occurring log entries of
                                    the last 2 days
          allDevCount             : all devices occurring in the database and their count
          allDevReadCount         : all device/reading combinations occurring in the database
                                    and their count


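        For example, to get the record count per device (the DbRep device name is only an
        assumed example):

          set Rep.LogDB1 sqlSpecial allDevCount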
      • sumValue [display | writeToDB] - calculates the sum of the database field "VALUE"
        within the time limits given by the attributes "timestamp_begin", "timestamp_end" or
        "timeDiffToNow / timeOlderThan".
        The reading to evaluate must be specified with the attribute "reading". This function
        is useful if value differences of a reading are continuously written to the database.

        If no option or the option "display" is given, the results are only displayed. With
        the option "writeToDB" the calculation results are stored in the database with a new
        reading name.
        The new reading name is built from a prefix and the original reading name, where the
        original reading name can be replaced by the attribute "readingNameMap". The prefix
        consists of the calculation function and the aggregation.
        The timestamp of the new readings in the database is derived from the configured
        aggregation period, unless an unambiguous point in time of the result can be
        determined. The field "EVENT" is filled with "calculated".

          Example of a new reading name built from the original reading "totalpac":
          sum_day_totalpac
          # <calculation function>_<aggregation>_<original reading>
      • syncStandby <DbLog-Device Standby> - transfers datasets from the connected database
        (source) directly into a further database (standby database). Here
        "<DbLog-Device Standby>" is the DbLog device that is connected to the standby
        database.

        All datasets determined by the timestamp attributes and the attributes "device" and
        "reading" are transferred.
        The datasets are transferred in time slices according to the configured aggregation.
        If the attribute "aggregation" has the value "no" or "month", the datasets are
        automatically transferred to the standby database in daily time slices. Source and
        standby database can be of different types.

        The attributes relevant for controlling the syncStandby function are:

          aggregation : setting of the time slices for the transfer (hour,day,week)
          device      : transfer only datasets which contain <device>
          reading     : transfer only datasets which contain <reading>
          time.*      : attributes to limit the time range of the datasets to transfer

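        As a sketch, synchronising the last 7 days in daily time slices to a standby database
        could look like this; the device names "Rep.LogDB1" and "LogDBStandby" are only
        assumed examples:

          attr Rep.LogDB1 timeDiffToNow d:7
          attr Rep.LogDB1 aggregation day
          set Rep.LogDB1 syncStandby LogDBStandby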
      • tableCurrentFillup - the current table is filled with an extract of the history table.
        The attributes for time restriction and device, reading are evaluated, so the content
        of the extract can be influenced. The attribute "DbLogType=SampleFill/History" should
        be set in the associated DbLog device.

      • tableCurrentPurge - deletes the content of the current table. No limitations, e.g. by
        the attributes "timestamp_begin", "timestamp_end", device, reading, etc., are
        evaluated.

      • vacuum - optimizes the tables of the connected database (SQLite, PostgreSQL).
        A FHEM command can be executed before and after the optimization (see attributes
        "executeBeforeProc", "executeAfterProc").

          Note:
          Although the function itself is designed non-blocking, the assigned DbLog device
          must be operated in asynchronous mode to avoid blocking FHEM.
    For all evaluation variants (except sqlCmd, deviceRename, readingRename) the following
    applies:
    In addition to the reading to evaluate, the device can be specified to restrict the
    reporting by these criteria. If no time limit attributes are given but the aggregation
    attribute is set, the timestamp of the oldest dataset in the database is used as start
    date and the current date/time as end of the time range. If the oldest dataset in the
    database could not be determined, '1970-01-01 01:00:00' is used as selection start (see
    get <name> minTimestamp).
    If neither time limit attributes nor aggregation are given, the data selection is executed
    without timestamp restrictions.

    Note:
    In the detail view a browser refresh may be necessary to see the operation results as soon
    as "state = done" is shown in the DeviceOverview.
Get

    The get commands of DbRep are used to query a number of metadata of the database instance
    in use, for example configured server parameters, server variables, database status and
    table information. The available get functions depend on the database type used; for
    SQLite only "svrinfo" is currently available. The functions natively deliver a large
    number of output values, which can be narrowed down via function-specific attributes. The
    filter has to be applied as a comma-separated list. SQL wildcard (%) can be used.

    Note:
    After executing a get function in the detail view, do a browser refresh to see the
    results!
      • blockinginfo - lists the background processes (BlockingCalls) currently running
        system-wide together with their information. Overly long strings (e.g. arguments) are
        truncated in the output.

      • dbstatus - lists global information about the MySQL server status (e.g. information
        about cache, threads, buffer pools, etc.). Initially all available information is
        reported. With the attribute "showStatus" the result set can be restricted to
        retrieve only the desired results. Detailed information about the meaning of the
        individual readings is available here.

          Example
          get <name> dbstatus
          attr <name> showStatus %uptime%,%qcache%
          # only readings whose names contain "uptime" and "qcache" are created
      • dbValue <SQL-Statement> - executes the given SQL statement blocking. Because of this
        way of working, the function is particularly suited for use in user scripts.
        The input accepts multi-line statements and also returns multi-line results. If
        several fields are selected and returned, they are separated with the separator of
        the attribute "sqlResultFieldSep" (default "|"). Several result lines are separated
        with newline ("\n").
        This function only sets/updates status readings; the function defined in the attribute
        "userExitFn" is not called.

          Examples for use in FHEMWEB
          {fhem("get <name> dbValue select device,count(*) from history where timestamp > '2018-04-01' group by device")}
          get <name> dbValue select device,count(*) from history where timestamp > '2018-04-01' group by device
          {CommandGet(undef,"Rep.LogDB1 dbValue select device,count(*) from history where timestamp > '2018-04-01' group by device")}

        If you create a small routine in 99_myUtils, e.g.:

        sub dbval($$) {
          my ($name,$cmd) = @_;
          my $ret = CommandGet(undef,"$name dbValue $cmd");
          return $ret;
        }

        dbValue can be used in a simplified way with calls like:

          Examples
          {dbval("<name>","select count(*) from history")}
          or
          $ret = dbval("<name>","select count(*) from history");

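        Since dbValue returns rows separated by "\n" and fields separated by the
        "sqlResultFieldSep" separator (default "|"), a multi-row result can be split up
        easily. A minimal sketch for 99_myUtils, building on the dbval() helper above; the
        function name "devCounts" is only an assumed example:

        sub devCounts {
          my ($name) = @_;
          # group the history table by device; returns e.g. "dev1|123\ndev2|456"
          my $res = dbval($name, "select DEVICE,count(*) from history group by DEVICE");
          my %cnt;
          for my $row (split /\n/, $res) {
            # fields are separated by the attribute sqlResultFieldSep (default "|")
            my ($dev, $n) = split /\|/, $row;
            $cnt{$dev} = $n;
          }
          return \%cnt;   # e.g. $ret->{dev1} == 123
        }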
      • dbvars - shows the global values of the MySQL system variables, for example
        information about the InnoDB home, the datafile path, memory and cache parameters,
        etc. Initially all available information is listed. With the attribute
        "showVariables" the result set can be restricted to retrieve only the desired
        results. Further information about the meaning of the reported variables is available
        here.

          Example
          get <name> dbvars
          attr <name> showVariables %version%,%query_cache%
          # only readings whose names contain "version" and "query_cache" are created

      • minTimestamp - determines the timestamp of the oldest dataset in the database
        (executed implicitly at FHEM start). The timestamp is used as selection start if no
        time attribute defines the selection start.

      • procinfo - lists the existing database processes in a table (MySQL only).
        Typically only the processes of the connection user (given in the DbLog
        configuration) are shown. If all processes are to be displayed, the user has to be
        granted the global privilege "PROCESS".
        As of MariaDB 5.3, a progress report (column "PROGRESS") is shown for certain SQL
        statements. For example, the degree of processing can be tracked during index
        creation. Further information is available here.
      • svrinfo - general database server information such as the DBMS version, server address
        and port, etc. The amount of list elements depends on the database type. With the
        attribute "showSvrInfo" the result set can be restricted. Further explanations of the
        delivered information can be found here.

          Example
          get <name> svrinfo
          attr <name> showSvrInfo %SQL_CATALOG_TERM%,%NAME%
          # only readings whose names contain "SQL_CATALOG_TERM" and "NAME" are created

      • tableinfo - retrieves table information from the database connected to the DbRep
        device (MySQL). By default all tables created in the connected database are
        evaluated. With the attribute "showTableInfo" the results can be restricted.
        Explanations of the created readings can be found here.

          Example
          get <name> tableinfo
          attr <name> showTableInfo current,history
          # only information about the tables "current" and "history" is shown
Attributes

    The module-specific attributes control the scope of the evaluation and the aggregation of
    the values.

    Note on SQL wildcard usage:
    Within the attribute values of "device" and "reading" the SQL wildcard "%" can be used,
    where "%" serves as a placeholder for any number of characters. The character "_" is not
    supported as an SQL wildcard.
    This applies to all functions except "insert", "importFromFile" and "deviceRename".
    The function "insert" does not allow the mentioned attributes to contain the wildcard "%";
    the character "_" is treated as a normal character.
    In result readings the wildcard character "%" is replaced by "/" to comply with the rules
    for allowed characters in readings.
      • aggregation - aggregation of the device/reading selections into hours, days, calendar
        weeks, calendar months, or "no".
        Delivers e.g. the number of DB entries per day (countEntries), summation of difference
        values of a reading (sumValue), etc.
        With aggregation "no" (default) there is no aggregation into a period; instead the
        result is determined from all values of a device/reading combination between the
        defined time limits.

      • allowDeletion - unlocks the delete function of the module

      • averageCalcForm - determines the calculation variant for the average value computed by
        "averageValue".

        Currently the following variants are implemented:

          avgArithmeticMean : the arithmetic mean is calculated (default)
          avgDailyMeanGWS   : calculates the daily mean temperature according to the
                              specifications of the German Weather Service (see "helpful
                              hints" in get versionNotes).
                              This variant automatically uses the aggregation "day".
          avgTimeWeightMean : calculates the time-weighted mean

      • device - restricts the DB selections to a certain device.
        Device specifications (devspec) can be given.
        Within device specifications the SQL wildcard (%) is treated as a normal ASCII
        character. The device names are derived from the device specification and the devices
        currently present in FHEM before the selection takes place.

          Examples:
          attr <name> device TYPE=DbRep
          attr <name> device MySTP_5000
          attr <name> device SMA.*,MySTP.*
          attr <name> device SMA_Energymeter,MySTP_5000
          attr <name> device %5000

        See also device specifications (devspec).

      • diffAccept - applies to the diffValue function. diffAccept determines the threshold up
        to which a calculated positive value difference between two immediately consecutive
        datasets is accepted (default is 20).
        This excludes faulty DB entries with a disproportionately high difference value from
        the calculation so that they do not distort the result. If threshold overruns occur,
        the reading "diff_overrun_limit_<diffLimit>" is created (<diffLimit> is replaced by
        the current attribute value). It contains a list of the relevant value pairs. With
        verbose 3 these datasets are also logged in the logfile.

          Example of logfile output when diffAccept=10 is exceeded:

          DbRep Rep.STP5000.etotal -> data ignored while calc diffValue due to threshold overrun (diffAccept = 10):
          2016-04-09 08:50:50 0.0340 -> 2016-04-09 12:42:01 13.3440

          # The first dataset with a value of 0.0340 is untypically low compared to the next
            value 13.3440 and leads to an excessive difference value.
          # You have to decide whether the dataset should be deleted or ignored, or whether
            the attribute diffAccept should be adjusted.

      • disable - deactivates the module

      • dumpComment - user comment. It is written into the header of the dump file created by
        the command "dumpMySQL clientSide".

      • dumpCompress - if set, the dump files created by "dumpMySQL" or "dumpSQLite" are
        compressed

      • dumpDirLocal - target directory for creating dumps with "dumpMySQL clientSide"
        (default: "{global}{modpath}/log/" on the FHEM server).
        In this directory old backup files are also searched for and deleted by the internal
        version management of "dumpMySQL" if their number exceeds the attribute value
        "dumpFilesKeep". The attribute also serves to make a locally mounted directory
        "dumpDirRemote" known to DbRep.

      • dumpDirRemote - target directory for creating dumps with "dumpMySQL serverSide"
        (default: the home directory of the MySQL server on the MySQL host)

      • dumpMemlimit - allowed memory consumption of the dump SQL script at generation time
        (default: 100000 characters). Please adjust this parameter if memory bottlenecks and
        resulting performance problems occur.

      • dumpSpeed - number of rows fetched from the source database per select by
        "dumpMySQL clientSide" (default: 10000). This parameter has a direct influence on the
        runtime and the resource consumption at runtime.

      • dumpFilesKeep - the given number of dump files is kept in the dump directory
        (default: 3). If more (older) dump files are present, they are deleted after a new
        dump has been created successfully. The global attribute "archivesort" is taken into
        account.

      • executeAfterProc - a FHEM command or Perl function can be specified which is to be
        executed after the command processing.
        Perl functions have to be enclosed in {}.

          Example:

          attr <name> executeAfterProc set og_gz_westfenster off;
          attr <name> executeAfterProc {adump ("<name>")}

          # "adump" is a function defined in 99_myUtils:

          sub adump {
              my ($name) = @_;
              my $hash = $defs{$name};
              # your own code goes here, e.g.
              Log3($name, 3, "DbRep $name -> dump finished");

              return;
          }
      • executeBeforeProc - a FHEM command or Perl function can be specified which is to be
        executed before the command processing.
        Perl functions have to be enclosed in {}.

          Example:

          attr <name> executeBeforeProc set og_gz_westfenster on;
          attr <name> executeBeforeProc {bdump ("<name>")}

          # "bdump" is a function defined in 99_myUtils:

          sub bdump {
              my ($name) = @_;
              my $hash = $defs{$name};
              # your own code goes here, e.g.
              Log3($name, 3, "DbRep $name -> dump is starting");

              return;
          }
      • expimpfile - path/file name for exporting to or importing from a file.

        The file name may contain placeholders which are replaced according to the following
        table. Furthermore, %-wildcards of the POSIX strftime function of the underlying OS
        may be included (see also the strftime description).

          %L   : is replaced by the value of the global logdir attribute
          %TSB : is replaced by the (calculated) value of the timestamp_begin attribute

          Commonly used POSIX wildcards are:
          %d : day of the month (01..31)
          %m : month (01..12)
          %Y : year (1970...)
          %w : day of the week (0..6); starting with Sunday (0)
          %j : day of the year (001..366)
          %U : week number of the year, with Sunday as first day of the week (00..53)
          %W : week number of the year, with Monday as first day of the week (00..53)

          Examples:
          attr <name> expimpfile /sds1/backup/exptest_%TSB.csv
          attr <name> expimpfile /sds1/backup/exptest_%Y-%m-%d.csv

        On POSIX wildcard usage see also the explanations for FileLog.

      • fetchMarkDuplicates - highlighting of datasets occurring more than once in the result
        of the "fetchrows" command

      • fetchRoute [descent | ascent] - determines the read direction of the fetchrows
        command.

          descent - the datasets are read in descending order (default). If the number of
          datasets defined by the attribute "limit" is exceeded, the newest x datasets are
          displayed.

          ascent - the datasets are read in ascending order. If the number of datasets defined
          by the attribute "limit" is exceeded, the oldest x datasets are displayed.


      • ftpUse - FTP transfer after a dump is switched on (without SSL encryption). The
        created database backup file is transferred non-blocking to the FTP server given in
        the attribute "ftpServer".

      • ftpUseSSL - FTP transfer with SSL encryption after a dump is switched on. The created
        database backup file is transferred non-blocking to the FTP server given in the
        attribute "ftpServer".

      • ftpUser - user for login on the FTP server after a dump, default: "anonymous".

      • ftpDebug - debugging of the FTP communication for troubleshooting.

      • ftpDir - directory on the FTP server into which the file is to be transferred after a
        dump (default: "/").

      • ftpDumpFilesKeep - the given number of dump files is kept in <ftpDir> (default: 3).
        If more (older) dump files are present, they are deleted after a new dump has been
        transferred successfully.

      • ftpPassive - set if passive FTP is to be used

      • ftpPort - FTP port, default: 21

      • ftpPwd - password of the FTP user, not set by default

      • ftpServer - name or IP address of the FTP server for transferring files after a dump.

      • ftpTimeout - timeout of an FTP connection in seconds (default: 30).

      • limit - limits the number of resulting datasets in the select statement of
        "fetchrows", and the number of datasets displayed by the commands
        "delSeqDoublets adviceDelete" and "delSeqDoublets adviceRemain" (default 1000). This
        limitation is intended to prevent an overload of the browser session and blocking of
        FHEMWEB. Change it if required, or adapt the selection criteria (time range of the
        evaluation) instead.

      • optimizeTablesBeforeDump - if "1", a table optimization is executed before the
        database dump (default: 0). This extends the runtime of the dump.

          Note
          The table optimization leads to locking of the tables and thus to blocking of FHEM
          if DbLog is not operated in asynchronous mode (DbLog attribute "asyncMode")!

      • reading - restricts the DB selections to a certain reading or to several readings.
        Several readings are given as a comma-separated list.
        The SQL wildcard (%) is treated as a normal ASCII character within a list.

          Examples:
          attr <name> reading etotal
          attr <name> reading et%
          attr <name> reading etotal,etoday

      • readingNameMap - the name of the evaluated reading is overwritten with this string for
        display

      • readingPreventFromDel - comma-separated list of readings which are not to be deleted
        before a new operation

      • role - the role of the DbRep device. The default is "Client". The role "Agent" is
        described in the section "DbRep Agent".

        See also the section DbRep Agent.

      • seqDoubletsVariance - accepted deviation (+/-) for the command
        "set <name> delSeqDoublets".
        The value of the attribute describes the deviation up to which consecutive numeric
        values (VALUE) of datasets are considered equal and are to be deleted.
        "seqDoubletsVariance" is an absolute numeric value that is used both as positive and
        as negative deviation.

          Examples:
          attr <name> seqDoubletsVariance 0.0014
          attr <name> seqDoubletsVariance 1.45

      • showproctime - if set, the reading "sql_processing_time" shows the processing time (in
        seconds) required for the SQL execution of the performed function. Not a single SQL
        statement, but the sum of all SQL queries required within the respective function is
        considered.

      • showStatus - restricts the result set of the command "get <name> dbstatus". SQL
        wildcard (%) can be used.

          Example:
          attr <name> showStatus %uptime%,%qcache%
          # only readings whose names contain "uptime" and "qcache" are created

      • showVariables - restricts the result set of the command "get <name> dbvars". SQL
        wildcard (%) can be used.

          Example:
          attr <name> showVariables %version%,%query_cache%
          # only readings whose names contain "version" and "query_cache" are created

      • showSvrInfo - restricts the result set of the command "get <name> svrinfo". SQL
        wildcard (%) can be used.

          Example:
          attr <name> showSvrInfo %SQL_CATALOG_TERM%,%NAME%
          # only readings whose names contain "SQL_CATALOG_TERM" and "NAME" are created

      • showTableInfo - restricts the result set of the command "get <name> tableinfo". SQL
        wildcard (%) can be used.

          Example:
          attr <name> showTableInfo current,history
          # only information about the tables "current" and "history" is shown

      • sqlResultFieldSep - determines the field separator (default: "|") used in the result
        of the command "set ... sqlCmd".

      • sqlCmdHistoryLength - activates the command history of "sqlCmd" and determines its
        length

      • sqlResultFormat - determines the formatting of the result of the command
        "set <name> sqlCmd". Possible options are:

          separated - the result lines are generated sequentially as individual readings.
          (default)

          mline - the result is shown as a multi-line entry in the reading SqlResult.

          sline - the result is shown as a single line in the reading SqlResult. The record
          separator is "]|[".

          table - the result is shown as a table in the reading SqlResult.

          json - creates the reading SqlResult as a JSON-encoded hash. Each hash element
          (result record) consists of the sequence number of the dataset (key) and its value.

          The further processing of the result can be done e.g. with the following userExitFn
          in 99_myUtils.pm:

          sub resfromjson {
            my ($name,$reading,$value) = @_;
            my $hash = $defs{$name};

            if ($reading eq "SqlResult") {
              # only the reading SqlResult contains JSON-encoded data;
              # decode_json requires the JSON module (e.g. "use JSON;"),
              # unless it is already loaded elsewhere
              my $data = decode_json($value);

              foreach my $k (keys(%$data)) {
                # insert your own processing of each hash element here,
                # e.g. report every element which contains "Cam"
                my $ke = $data->{$k};
                if($ke =~ m/Cam/i) {
                  my ($res1,$res2) = split("\\|", $ke);
                  Log3($name, 1, "$name - extract element $k by userExitFn: ".$res1." ".$res2);
                }
              }
            }
            return;
          }

      • timeYearPeriod - this attribute determines a yearly period for the database selection.
        The time limits are calculated dynamically at execution time. A one-year period is
        always determined; a period of less than a year cannot be specified.
        This attribute is primarily intended for evaluations synchronised with a billing
        period, e.g. that of an energy or gas supplier.

          Example:

          attr <name> timeYearPeriod 06-25 06-24

          # evaluates the database within the time limits June 25, AAAA to June 24, BBBB.
          # The year AAAA or BBBB is calculated depending on the current date.
          # If the current date is >= June 25 and <= December 31, then AAAA = current year
            and BBBB = current year + 1.
          # If the current date is >= January 1 and <= June 24, then AAAA = current year - 1
            and BBBB = current year.

      • timestamp_begin - the temporal start of the data selection

        The timestamp format is "YYYY-MM-DD HH:MM:SS". For the attributes "timestamp_begin"
        and "timestamp_end" one of the following inputs can be used as well. In that case the
        timestamp attribute is set dynamically:

          current_year_begin   : corresponds to "<current year>-01-01 00:00:00"
          current_year_end     : corresponds to "<current year>-12-31 23:59:59"
          previous_year_begin  : corresponds to "<previous year>-01-01 00:00:00"
          previous_year_end    : corresponds to "<previous year>-12-31 23:59:59"
          current_month_begin  : corresponds to "<first day of current month> 00:00:00"
          current_month_end    : corresponds to "<last day of current month> 23:59:59"
          previous_month_begin : corresponds to "<first day of previous month> 00:00:00"
          previous_month_end   : corresponds to "<last day of previous month> 23:59:59"
          current_week_begin   : corresponds to "<first day of current week> 00:00:00"
          current_week_end     : corresponds to "<last day of current week> 23:59:59"
          previous_week_begin  : corresponds to "<first day of previous week> 00:00:00"
          previous_week_end    : corresponds to "<last day of previous week> 23:59:59"
          current_day_begin    : corresponds to "<current day> 00:00:00"
          current_day_end      : corresponds to "<current day> 23:59:59"
          previous_day_begin   : corresponds to "<previous day> 00:00:00"
          previous_day_end     : corresponds to "<previous day> 23:59:59"
          current_hour_begin   : corresponds to "<current hour>:00:00"
          current_hour_end     : corresponds to "<current hour>:59:59"
          previous_hour_begin  : corresponds to "<previous hour>:00:00"
          previous_hour_end    : corresponds to "<previous hour>:59:59"

      • timestamp_end - the temporal end of the data selection. If not set, the current
        date/time is always used as the end of the selection.

        The timestamp format is "YYYY-MM-DD HH:MM:SS". The same dynamic inputs as listed for
        "timestamp_begin" can be used here as well.

        Of course you should always make sure that "timestamp_begin" < "timestamp_end".

          Example:

          attr <name> timestamp_begin current_year_begin
          attr <name> timestamp_end current_year_end

          # evaluates the database within the time limits of the current year.

        Note
        If the attribute "timeDiffToNow" is set, any other time attributes that may be set
        ("timestamp_begin","timestamp_end","timeYearPeriod") are deleted. Setting
        "timestamp_begin" or "timestamp_end" causes the deletion of other time attributes if
        they were set before.

      • timeDiffToNow - the selection start is set to the point in time
        "<current time> - <timeDiffToNow>" (e.g. the last 24 hours are included in the
        selection if the attribute is set to "86400"). The timestamps are determined
        dynamically at execution time.

          Input format examples:
          attr <name> timeDiffToNow 86400
          # the start time is set to "current time - 86400 seconds"
          attr <name> timeDiffToNow d:2 h:3 m:2 s:10
          # the start time is set to "current time - 2 days 3 hours 2 minutes 10 seconds"
          attr <name> timeDiffToNow m:600
          # the start time is set to "current time - 600 minutes"
          attr <name> timeDiffToNow h:2.5
          # the start time is set to "current time - 2.5 hours"
          attr <name> timeDiffToNow y:1 h:2.5
          # the start time is set to "current time - 1 year and 2.5 hours"
          attr <name> timeDiffToNow y:1.5
          # the start time is set to "current time - 1.5 years"

        If the attributes "timeDiffToNow" and "timeOlderThan" are both set, the selection
        period between these points in time is calculated dynamically.

      • timeOlderThan - the selection end is set to the point in time
        "<current time> - <timeOlderThan>". Thereby all datasets up to the point in time
        "<current time> - <timeOlderThan>" are considered (e.g. if set to 86400, all datasets
        that are older than one day are considered). The timestamps are determined
        dynamically at execution time.

          Input format examples:
          attr <name> timeOlderThan 86400
          # the selection end is set to "current time - 86400 seconds"
          attr <name> timeOlderThan d:2 h:3 m:2 s:10
          # the selection end is set to "current time - 2 days 3 hours 2 minutes 10 seconds"
          attr <name> timeOlderThan m:600
          # the selection end is set to "current time - 600 minutes"
          attr <name> timeOlderThan h:2.5
          # the selection end is set to "current time - 2.5 hours"
          attr <name> timeOlderThan y:1 h:2.5
          # the selection end is set to "current time - 1 year and 2.5 hours"
          attr <name> timeOlderThan y:1.5
          # the selection end is set to "current time - 1.5 years"

        If the attributes "timeDiffToNow" and "timeOlderThan" are both set, the selection
        period between these points in time is calculated dynamically.

      • timeout - sets the timeout value for the BlockingCall routines in seconds
        (default: 86400)

      • userExitFn - provides an interface for executing your own user code.
        To activate the interface, first create the subroutine to call in 99_myUtils.pm
        according to the following pattern:

        sub UserFunction {
          my ($name,$reading,$value) = @_;
          my $hash = $defs{$name};
          ...
          # e.g. log the parameters that were passed
          Log3 $name, 1, "UserExitFn $name called - transfer parameter are Reading: $reading, Value: $value ";
          ...
          return;
        }

        The interface is activated by setting the function name in the attribute. Optionally a
        Reading:Value regex can be given as an argument. If no regex is given, all value
        combinations are evaluated as "true" (corresponds to .*:.*).

          Example:
          attr <name> userExitFn UserFunction .*:.*
          # "UserFunction" is the subroutine in 99_myUtils.pm.

        In general, the interface works WITHOUT event generation and does not need an event to
        function. If the attribute is set, the regex is checked AFTER a reading has been
        created. If the check is true, the specified function is called. The following
        variables are passed to the called function for further processing:

        • $name - the name of the DbRep device
        • $reading - the name of the created reading
        • $value - the value of the reading
      • valueFilter - regular expression for filtering datasets within certain functions. The
        regex is applied to the whole selected dataset (incl. device, reading, etc.). Please
        compare with the explanations of the corresponding set commands.
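        For instance, to restrict a selection to rows that contain the string "Einspeisung"
        anywhere in the dataset (the DbRep device name and the pattern are only assumed
        examples):

          attr Rep.LogDB1 valueFilter Einspeisung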

Readings

    Depending on the executed DB operation, the results are presented in corresponding
    readings. At the beginning of a new operation, all old readings of a previous operation
    are deleted to avoid leftover inappropriate or invalid readings.

    In addition, the following readings are created (selection):

      • state - contains the current state of the evaluation. If warnings occurred
        (state = Warning), compare the readings "diff_overrun_limit_<diffLimit>" and
        "less_data_in_period"

      • errortext - reason of an error state

      • background_processing_time - the total processing time spent in the
        background/BlockingCall

      • diff_overrun_limit_<diffLimit> - contains a list of value pairs which exceed the
        difference <diffLimit> defined by the attribute "diffAccept" (default: 20). Applies
        to the function "diffValue".

      • less_data_in_period - contains a list of time periods in which only a single dataset
        was found. The difference calculation considers the last value of the previous
        period. Applies to the function "diffValue".

      • sql_processing_time - the share of the processing time spent for all SQL statements of
        the executed operation

      • SqlResult - result of the last sqlCmd command. The formatting follows the attribute
        "sqlResultFormat"

      • sqlCmd - the last executed sqlCmd command
DbRep Agent - automatic changing of device names in databases and DbRep definitions after the FHEM "rename" command

    The attribute "role" determines the role of the DbRep device. The default role is
    "Client". Changing the role to "Agent" causes the device to react to renamings of devices
    in the FHEM installation.

    The DbRep agent activates the following features when a device in FHEM is renamed with
    "rename":

      • in the database assigned to the DbRep agent (Internal Database), datasets with the old
        device name are searched for, and this device name is changed to the new name in all
        affected datasets.

      • in the DbLog device assigned to the DbRep agent, the old device is replaced by the
        renamed device in the definition. This ensures continued logging of the renamed
        device in the database.

      • in the existing DbRep definitions of type "Client", an attribute
        "device = old device name" that may be set is changed to "device = new device name".
        This keeps evaluation definitions automatically consistent when devices are renamed.

    The change into an agent comes with the following restrictions, which are switched on and
    checked when the attribute "role = Agent" is set:

      • there can be only one agent per database in the FHEM installation. If more than one
        database is defined with DbLog, just as many DbRep agents can be set up

      • after the conversion into an agent, only the set command "renameDevice" remains
        available, and only a restricted set of DbRep-specific attributes is allowed. If a
        DbRep device of the previous type "Client" is changed into an agent, attributes that
        may be set but are no longer allowed are deleted.

    The activities such as database changes or changes to other DbRep definitions are logged
    in the logfile with verbose=3. So that the renameDevice function does not run into a
    timeout with large databases, the attribute "timeout" should be sized accordingly. Like
    all database operations of this module, the auto-rename is executed non-blocking.

      Example of the definition of a DbRep device as an agent:

      define Rep.Agent DbRep LogDB
      attr Rep.Agent devStateIcon connected:10px-kreis-gelb .*disconnect:10px-kreis-rot .*done:10px-kreis-gruen
      attr Rep.Agent icon security
      attr Rep.Agent role Agent
      attr Rep.Agent room DbLog
      attr Rep.Agent showproctime 1
      attr Rep.Agent stateFormat { ReadingsVal("$name","state", undef) eq "running" ? "renaming" : ReadingsVal("$name","state", undef). " »; ProcTime: ".ReadingsVal("$name","sql_processing_time", undef)." sec"}
      attr Rep.Agent timeout 86400

    Note:
    Although the function itself is designed non-blocking, the assigned DbLog device should be
    operated in asynchronous mode to avoid blocking FHEMWEB (table lock).
=end html_DE
=cut