
Wednesday, 15 January 2025

Mobile and analytics app developers: optimizing the incremental load from Tally and reducing sync time by 99 percent through a data warehouse

Problem statement: an app developer has completed the initial data sync from Tally and now needs to capture incremental data. Here we discuss the approaches for transactions and masters, and look at how a data warehouse can reduce incremental load times by more than 99%.


Load through ODBC: if the application works through ODBC, then a full extract is needed each time, since ODBC presents a full snapshot. There are more than 160 ODBC extracts in Tally, so the incremental extract would need to be scheduled in one of the following ways:

a.) Nightly load.

b.) Load over the fortnight.



Load through XML: many developers like to sync the reports directly through XML. Extracting the complete set of vouchers could put a heavy load on the system, so it is recommended that vouchers be extracted on a day-by-day basis (https://mndatasolutionsindia.blogspot.com/2025/01/reducing-time-for-mobile-refresh-tally.html).

Difference between ledger vouchers and the voucher register: note that the complete sales movement is captured in the ledger vouchers (drill down from the sales accounts in Tally to the ledger vouchers). The voucher register gives movement by voucher type [including inventory movements] and could therefore exclude the journal movements.


 



 




LedgerVouchers is a parameterized report taking svcurrentcompany, svfromdate and svtodate.

 

The voucher register gives movement by voucher type only [including inventory]. The sales numbers in the voucher register might not match the ledger vouchers movement (which covers all the required movement types).

 

 

Alternatives for the developer:

a.) Extract the profit and loss report using the following XML (the sample includes repeat variables as well).

Thereafter, based on the year and month selected, drill down to the required vouchers based on lledgerprimary. Alternatively, based on svfromdate, svtodate and the company name, a request can be sent to Tally to retrieve the required sales vouchers:

sample xml code added.


Here ledgername is the input parameter.



Bottleneck: to extract the vouchers (from which the ledger vouchers report is derived), one would need to extract them on a day-by-day basis so that Tally does not hang. This synchronization process can take a lot of time.

 

Process for reducing the execution time:

We recommend using a data warehouse and the event-based activities in Tally to capture the changes in transactions and masters (https://mndatasolutionsindia.blogspot.com/2025/01/reducing-time-for-mobile-refresh-tally.html). This captures the changes as the entries are made into the vouchers or the masters.


Note: at least 16 GB of RAM is required.

The internet connection needs to be always on [otherwise synchronization cannot happen].

 

The concurrency impact would need to be checked: if there are a lot of developers, then we need to check the load on the data warehouse.

The onsite cost of hosting the data in the database also needs to be checked.

Our recommendation would be to load the data into a desktop tool like Power BI or a desktop database like SQL Server, and then sync with the mobile app.

 

 

 


Can event-based activities be used with the XML export? As far as we know, this cannot be done with embedded TDL code.


Contact us for a demo.


Name: Apoorv Chaturvedi

Email: support@mndatasolutions.com; support@turbodatatool.com

Phone: +91-8802466356

YouTube channel: https://www.youtube.com/@mndatasolutions2042

 




Tuesday, 14 January 2025

Reducing the time for mobile refresh from Tally: a discussion of approaches

Problem statement: Tally is a hierarchical database with a desktop application. In order to refresh a mobile app, one needs to sync the data with a NoSQL or relational database. The issue is how to do the incremental refresh.


Using the standard Tally XML export: Tally can export any report in XML format. For example, the Daybook can be exported using the following XML code.

The given XML request assumes that Tally is being used as a web server.


<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Data</TYPE>
  <ID>Daybook</ID>
 </HEADER>
 <BODY>
  <DESC>
   <STATICVARIABLES>
    <EXPLODEFLAG>Yes</EXPLODEFLAG>
    <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
   </STATICVARIABLES>
  </DESC>
 </BODY>
</ENVELOPE>



The output of this request gives a complete list of masters and transactions.

However, there could be a need to get only the transactions in the load. In that scenario, the following sample collection export could be used.

<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Collection</TYPE>
  <ID>salesVouchers</ID>
 </HEADER>
 <BODY>
  <DESC>
   <TDL>
    <TDLMESSAGE>
     <COLLECTION NAME="salesVouchers" IsModify="No">
      <TYPE>Voucher</TYPE>
      <FETCH>*.*</FETCH>
      <COMPUTE>FOREXTRANSACTION_rohit:$$FOREXVALUE:$AMOUNT</COMPUTE>
      <COMPUTE>Ledgeramount:$$COLLECTIONFIELD:$AMOUNT:1:ALLLEDGERENTRIES</COMPUTE>
      <COMPUTE>HsnIgstRate_ledger:If $$IsEmpty:$.GSTOVRDNRate Then $$GetInfoFromHierarchy:$ledgername:"ledger":"GSTRate":$date:"GST":0:@@IGSTDutyHead Else $.GSTOVRDNRate</COMPUTE>
      <SYSTEM Type="Formulae" NAME="opfilter">$$ISSALES:$VOUCHERTYPENAME</SYSTEM>
     </COLLECTION>
    </TDLMESSAGE>
   </TDL>
  </DESC>
 </BODY>
</ENVELOPE>

In this scenario, only the sales vouchers are extracted, based on the filter parameters for the vouchers. No master details are extracted.
However, the complete extraction of the sales vouchers could place a heavy load on the transaction system. In that scenario it is best to extract the vouchers on a day-by-day basis, beginning with the DSPFromdate and ending with the DSPTodate. The data from each voucher entry is then stored in the database.
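The day-by-day extraction described above can be sketched as follows. This is a hedged illustration, not a definitive implementation: it assumes the Tally HTTP server is listening on http://localhost:9000, and `daily_windows` and `fetch_day` are hypothetical helper names.

```python
from datetime import date, timedelta

def daily_windows(start, end):
    """Yield (SVFROMDATE, SVTODATE) pairs, one per day, in YYYYMMDD format."""
    day = start
    while day <= end:
        stamp = day.strftime("%Y%m%d")
        yield stamp, stamp
        day += timedelta(days=1)

# Daybook export envelope from the post, parameterized by date window.
ENVELOPE = """<ENVELOPE>
<HEADER><VERSION>1</VERSION><TALLYREQUEST>Export</TALLYREQUEST>
<TYPE>Data</TYPE><ID>Daybook</ID></HEADER>
<BODY><DESC><STATICVARIABLES>
<SVFROMDATE>{frm}</SVFROMDATE><SVTODATE>{to}</SVTODATE>
<SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
</STATICVARIABLES></DESC></BODY></ENVELOPE>"""

def fetch_day(frm, to, url="http://localhost:9000"):
    # POST one day's request to the Tally HTTP server (assumed port 9000).
    import urllib.request
    req = urllib.request.Request(url, data=ENVELOPE.format(frm=frm, to=to).encode())
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Example: iterate one day at a time so Tally never builds a huge report in one shot.
# for frm, to in daily_windows(date(2024, 4, 1), date(2024, 4, 30)):
#     payload = fetch_day(frm, to)   # store each day's payload in the database
```

Keeping each request to a single day keeps the report small on the Tally side, at the cost of more round trips.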

Finding whether a voucher is an insert, update or delete: this can be done through the REMOTEID field in the XML export, as given below:
VOUCHER REMOTEID="13excel".
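One way to turn the REMOTEID field into insert/update/delete detection is to keep a snapshot of each voucher's REMOTEID and ALTERID (Tally increments a voucher's ALTERID when it changes) from the previous sync and compare. This is a sketch under those assumptions; `classify_vouchers` is a hypothetical helper name.

```python
def classify_vouchers(previous, current):
    """Classify vouchers by comparing REMOTEID -> ALTERID maps from two syncs.

    previous/current: dict mapping a voucher's REMOTEID to its ALTERID.
    Returns (inserted, updated, deleted) sets of REMOTEIDs.
    """
    inserted = {rid for rid in current if rid not in previous}
    deleted = {rid for rid in previous if rid not in current}
    updated = {rid for rid in current
               if rid in previous and current[rid] != previous[rid]}
    return inserted, updated, deleted

# Example: remote id "13excel" changed its ALTERID, so it counts as an update.
prev = {"13excel": 101, "14excel": 102}
curr = {"13excel": 150, "15excel": 160}
```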

Note: all XML exports work only for the company currently open in Tally. If there are multiple companies, the XML export needs to be done for each company individually.
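A per-company export loop can be sketched by injecting SVCURRENTCOMPANY into the envelope, one request per company. The company names and the use of SVCURRENTCOMPANY as the selector are assumptions for illustration.

```python
# One export request per company, since each XML export covers only the
# currently selected company. Company names here are placeholder values.
COMPANIES = ["Demo Company A", "Demo Company B"]

TEMPLATE = """<ENVELOPE>
<HEADER><VERSION>1</VERSION><TALLYREQUEST>Export</TALLYREQUEST>
<TYPE>Data</TYPE><ID>Daybook</ID></HEADER>
<BODY><DESC><STATICVARIABLES>
<SVCURRENTCOMPANY>{company}</SVCURRENTCOMPANY>
<SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
</STATICVARIABLES></DESC></BODY></ENVELOPE>"""

def build_requests(companies):
    """Return one request body per company."""
    return [TEMPLATE.format(company=c) for c in companies]

requests_out = build_requests(COMPANIES)
# Each entry would be POSTed to the Tally HTTP server (e.g. http://localhost:9000)
# in turn, and the responses stored per company.
```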

Heavy load on the system: if the load on the system is heavy, one could extract the vouchers on a date-by-date basis using the SVFROMDATE and SVTODATE parameters.



<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Collection</TYPE>
  <ID>VOUCHERINCREMENTAL</ID>
 </HEADER>
 <BODY>
  <DESC>
   <TDL>
    <TDLMESSAGE>
     <COLLECTION NAME="VOUCHERINCREMENTAL" ISMODIFY="No" ISFIXED="No" ISINITIALIZE="No" ISOPTION="No" ISINTERNAL="No">
      <TYPE>VOUCHER</TYPE>
      <NATIVEMETHOD>*.*</NATIVEMETHOD>
      <PARMVAR>svfromdate: Date:$$Date:@@machinedate-300</PARMVAR>
      <PARMVAR>svtodate:DATE: $$Date:@@machinedate-200</PARMVAR>
     </COLLECTION>
    </TDLMESSAGE>
   </TDL>
  </DESC>
 </BODY>
</ENVELOPE>


In the above TDL code, the PARMVAR values for the from and to dates can be set to capture the voucher data incrementally.

Masters incremental extraction via XML: for masters there is little option in terms of incremental extraction using XML. One needs to sync all the masters in one go, which can take time.


New approach: for the new approach, the M&N BI developers looked at the following parameters for initial and incremental extraction:

a.) Unlimited companies: we desired to use loopcollobj to extract the data from an unlimited number of companies.
b.) Lighter data extract: we decided to use JSON [used in a standard fashion for API extracts].
c.) A more complete extract than the ones done by other SaaS providers.
d.) Lower load on the system.
e.) Faster extract for the transactions and the masters: we decided to extract the data on a real-time basis [as the voucher is entered into the system]. We desired to use the event-based activities within Tally, specifically On: Form Accept and On: Form Delete.
f.) Updating the masters as a voucher is entered or the masters are updated/inserted.
g.) Reducing the data load transferred from the system to the cloud.


For this we developed custom collections to extract data from unlimited companies using the Tally function loopcollobj.

Reduce the load on the Tally system while extracting the voucher data: in order to reduce the load on Tally during the voucher extract, the team took the following actions:

a.) Broke the voucher into child components, such as the voucher header, the voucher ledger details and the voucher inventory details, each extracted separately.
After the extract was done into JSON files, the details were joined using SQL joins on the composite keys (companyname, masterid, and so forth).
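The re-assembly of the voucher from its child extracts can be sketched with SQLite; the staging table names and columns below are hypothetical, standing in for the JSON files loaded into the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical staging tables loaded from the per-component JSON extracts.
cur.execute("CREATE TABLE voucher_header (companyname TEXT, masterid INTEGER, vouchernumber TEXT)")
cur.execute("CREATE TABLE voucher_ledger (companyname TEXT, masterid INTEGER, ledgername TEXT, amount REAL)")

cur.execute("INSERT INTO voucher_header VALUES ('DemoCo', 1, 'SV-001')")
cur.executemany("INSERT INTO voucher_ledger VALUES (?, ?, ?, ?)",
                [("DemoCo", 1, "Sales Accounts", -1000.0),
                 ("DemoCo", 1, "Party A", 1000.0)])

# Re-assemble the voucher by joining the child extracts on the composite key.
rows = cur.execute("""
    SELECT h.vouchernumber, l.ledgername, l.amount
    FROM voucher_header h
    JOIN voucher_ledger l
      ON h.companyname = l.companyname AND h.masterid = l.masterid
    ORDER BY l.ledgername
""").fetchall()
```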


b.) Converting the hierarchical functions to SQL joins: the SQL joins are at least 70% faster than the hierarchical functions and consume at least 50% less memory. The same concept was used at the database level and in the Power BI module to speed up the querying process.
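One way to replace a per-row hierarchical lookup with set-based SQL is a recursive common table expression over the group table. This is a sketch with a hypothetical `groups` table; the group names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE groups (name TEXT, parent TEXT)")
cur.executemany("INSERT INTO groups VALUES (?, ?)",
                [("Primary", None),
                 ("Current Assets", "Primary"),
                 ("Sundry Debtors", "Current Assets"),
                 ("North Debtors", "Sundry Debtors")])

# Flatten the parent hierarchy for every group in one set-based query,
# instead of calling a hierarchy function once per row.
rows = cur.execute("""
    WITH RECURSIVE walk(name, path) AS (
        SELECT name, name FROM groups WHERE parent IS NULL
        UNION ALL
        SELECT g.name, w.path || ' > ' || g.name
        FROM groups g JOIN walk w ON g.parent = w.name
    )
    SELECT name, path FROM walk ORDER BY name
""").fetchall()
```

The single pass produces the full ParentHierarchy path for every group, which can then be joined to the transaction tables like any other column.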



Reducing the time to extract: our team developed a data warehouse and captured the incremental data load using the Form Accept events for masters and vouchers in Tally. This further speeds up the extraction process.


Sample code:

For example for the masters the incremental data load logic could be as follows:

[Collection: Group_Crash]

    Type   : Group
    Filter : MASTER_FILTER_Groupfilter
    Fetch  : Parent, ParentHierarchy, Masterid, Alterid, Isrevenue, Affectsstock, AffectsGrossprofit, Reservedname, Reservename, Issubledger, Istdsapplicable, Istcsapplicable, isbillwiseone, iscoscentreson, Istdsexpense, istcsapplicable, Isdeemedpositive

    Compute : Area        : $_Area
    Compute : PrimaryGroup : $_PrimaryGroup
    Compute : CompanyName : ##SVCurrentCompany
    Compute : Groupname   : $Name

;select $Name,$Parent,$Area,$ParentHierarchy,$PrimaryGroup,$Masterid,$Alterid from Group_crash

[#SYSTEM:FORMULAE]

    MASTER_FILTER_Groupfilter : $$String:$Name = $$String:#MSTNAME


Here #MSTname is the name associated with the group name in the Group form.



Similarly for voucher:

[#Form: VOUCHER]

    On : Form Accept : Yes : FORM ACCEPT
    On : Form Accept : Yes : Call : MasterFunction_Transaction

The voucher collection would include a statement like:

[Collection: VoucherDetails]

    Type : Voucher
    ;Parm Var : svfromdate : Date : $$Date:@@machinedate-30
    ;Parm Var : svtodate   : DATE : $$Date:@@machinedate
    Filter : MASTER_FILTER_VOUCHER


where the MASTER_FILTER_VOUCHER is as follows:

[#SYSTEM:FORMULAE]


MASTER_FILTER_VOUCHER: $Date=#plainvchdate and $$String:$Vouchernumber=$$String:#VCHNUMBER







Reducing the data load transferred from the system to the cloud: this was done through the incremental data load and through the data compression achieved by the Power BI module. If a SQL Server instance is installed, the speed is further improved by making the API request against a single table.



The above approaches can be considered for the initial and incremental data load from Tally to the cloud.


Name: Apoorv Chaturvedi

Phone: +91-8802466356

Email: support@turbodatatool.com; support@mndatasolutions.com

YouTube channel: https://www.youtube.com/@mndatasolutions2042

Indiamart page: https://www.indiamart.com/mnbusinessintelligenceindia/









Monday, 13 January 2025

Service level agreements for the Tally to Power BI module and the Tally to SQL Server reporting module

Problem statement: many firms use the top-down approach in Tally for report generation and face bottlenecks with regard to complete report generation.

For example, for collections, companies could look at debtor closing balances beyond the ones under 'Sundry Debtors'.



However the receivables can be in other groups also.


Many customers desire a more comprehensive collection process by taking a bottom-up approach.
This is done with respect to our JSON extract from Tally.

Attached are the service level agreements that we can sign:
a.) Unlimited companies extract.
b.) Report refresh time of less than 5 seconds. 
c.) Complete trial balance extract up to any level of ledgers.
d.) No parameterization of reports required for ledgervouchers, stockitemvouchers and costcentrevouchers.
e.) Complete ledger and group level hierarchy flattening up to 10 levels.
f.) If the SQL Server database is used, then using the incremental data load the extract time can be reduced to less than 10 seconds (on a per-voucher extract).
g.) No load on Tally while the reports are being refreshed.
h.) Minimal tables for analysis: in our core module without the cost centre we propose to reduce the number of table extracts to one. 

  • The single extract shall include voucher ledger, inventory movements, item hierarchy and ledger hierarchy movements in the Tally Power BI module.
  • The single table extract shall include voucher ledger, voucher item, batch, bill history, item hierarchy and ledger hierarchy movements in the Tally SQL Server module.

 



Note about inventory valuations: we shall offer inventory valuations based only on weighted average valuation. The entries shall include only isoptional:no, isvoid:no, iscancelled:no, isdeleted:no, isinvoice:yes and purchaseaccounts:yes. Any direct expense entries associated with the isinvoiced entries should be in the same voucher.


A separate note on the cost centre module:
The cost centre module is separate from the core single-table module, because there are a lot of complications with regard to cost centres (specifically with regard to inventory movements). We are, however, extracting ledger and inventory movements with regard to the cost centre movements.

Contact us for a free demo of the Tally extraction methods using XML and ODBC. You shall get a sample Tally to Excel extract via XML.

Name: Apoorv Chaturvedi
Email: support@turbodatatool.com; support@mndatasolutions.com
Phone: +91-8802466356
YouTube channel: https://www.youtube.com/@mndatasolutions2042

 




Tuesday, 7 January 2025

Converting Python code to set-based SQL logic

Problem statement: in Python one often uses cursor-based logic to query data on a row-by-row basis. Querying large data sets this way results in very slow query performance.


A sample Python query statement runs the cursor over the result set one row at a time. In this case the processing takes place on a row-by-row basis, which consumes a lot of RAM (random access memory).

We propose to change the processing to set-based queries using SQL joins.
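The contrast can be sketched with SQLite; the `sales` table and its values are hypothetical, and the row-by-row loop stands in for the cursor-based Python logic being replaced.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("North", 100.0), ("North", 250.0), ("South", 75.0)])

# Row-by-row (cursor-style) aggregation: every row travels into Python.
totals_rowwise = {}
for region, amount in cur.execute("SELECT region, amount FROM sales"):
    totals_rowwise[region] = totals_rowwise.get(region, 0.0) + amount

# Set-based equivalent: one GROUP BY, the aggregation stays in the engine.
totals_setbased = dict(cur.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))
```

Both forms compute the same totals, but the set-based query lets the database engine do the work in one pass instead of shuttling every row through the Python interpreter.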

The proposed benefits:

a.) The parameter-based reports can be changed to list reports using our methodology.

b.) The report reload and report refresh times should be reduced by at least 80%.


The benefits are as follows:

a.) With the bottom-up approach one can design one's own KPIs and metrics.

b.) One gets a better audit view with which to understand the financial metrics.



Prerequisites:

One should be able to handle large datasets with ease, since one is adopting the bottom-up approach. One also needs to code the on-the-fly business logic of the Python layer at the database level.

Contact Information:


Apoorv, M&N BI


Phone: +91-8802466356


Email: support@turbodatatool.com; support@mndatasolutions.com


YouTube: https://www.youtube.com/@mndatasolutions2042


Blog: mndatasolutionsindia.blogspot.com







