Wednesday, 15 January 2025

Mobile and analytics app developers: optimizing the incremental load from Tally and reducing refresh time by 99 percent through a data warehouse

Problem statement: App developers have synced the initial data from Tally, and now the incremental data needs to be captured. Here we discuss the approaches for transactions and masters, and look at how a data warehouse can reduce the incremental load time by more than 99%.


Load through ODBC: if the application works through ODBC, a full extract is needed each time, since each ODBC table represents a full snapshot. There are more than 160 ODBC extracts in Tally, so the incremental extract would need to be handled through scheduled loads, for example:

a.) A nightly load

b.) A load spread over the fortnight



Load through XML: Many developers like to sync the reports directly through XML. Complete voucher extraction can put a lot of load on the system, so it is recommended that vouchers be extracted on a day-by-day basis (https://mndatasolutionsindia.blogspot.com/2025/01/reducing-time-for-mobile-refresh-tally.html).

Difference between Ledger Vouchers and the Voucher Register: the developer should note that the complete sales movement is captured in Ledger Vouchers (drill down from the sales accounts in Tally to the ledger vouchers). The Voucher Register gives movement by voucher type (including inventory movements) and can therefore exclude journal movements.

Ledger Vouchers is a parameterized report, driven by SVCurrentCompany, SVFromDate and SVToDate.

The Voucher Register reports movement by voucher type only (including inventory). The sales numbers in the Voucher Register might therefore not match the Ledger Vouchers movement, which covers all the required movement types.

Alternatives for the developer:

a.) Extract the profit and loss report by using the following XML (the sample also includes repeat variables). Thereafter, based on the year and month selected, drill down to the required vouchers based on lledgerprimary. Alternatively, based on SVFromDate, SVToDate and the company name, the request can be sent to Tally to retrieve the required sales vouchers.

Sample XML code added.

Here the ledger name is the input parameter.
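Since the sample XML did not survive into this page, the following is a minimal sketch of what such a parameterized request could look like when sent from Python. The report name "Ledger Vouchers", the LEDGERNAME static variable, the company, the dates and the port are illustrative assumptions, not the original sample:

import requests

# Hedged sketch: fetch the "Ledger Vouchers" report for one ledger over
# a date range. Assumes Tally is listening as a web server on port 9000.
ENVELOPE = """<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Data</TYPE>
  <ID>Ledger Vouchers</ID>
 </HEADER>
 <BODY>
  <DESC>
   <STATICVARIABLES>
    <SVCURRENTCOMPANY>{company}</SVCURRENTCOMPANY>
    <SVFROMDATE>{fromdate}</SVFROMDATE>
    <SVTODATE>{todate}</SVTODATE>
    <LEDGERNAME>{ledger}</LEDGERNAME>
    <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
   </STATICVARIABLES>
  </DESC>
 </BODY>
</ENVELOPE>"""

xml = ENVELOPE.format(company="Demo Company", fromdate="1-Apr-2024",
                      todate="30-Apr-2024", ledger="Sales")
response = requests.post("http://localhost:9000", data=xml.encode("utf-8"))
print(response.text[:500])   # inspect the start of the XML reply

In production the ledger name would be looped over, or the drill-down avoided entirely via the day-by-day voucher pull described below.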



Bottleneck: to extract the vouchers (from which the voucher ledger report is built), one would need to extract them on a day-by-day basis so that Tally does not hang. This synchronization process can take a lot of time.

 

Process for reducing the execution time:

We recommend using a data warehouse together with the event-based activities in Tally to capture the changes in transactions and masters (https://mndatasolutionsindia.blogspot.com/2025/01/reducing-time-for-mobile-refresh-tally.html). This captures the changes as the entries into the vouchers or the masters happen.


Note: RAM of at least 16 GB is required.

The internet connection needs to be always on, otherwise synchronization cannot happen.

 

The concurrency impact would need to be checked: if there are a lot of developers, the load on the data warehouse needs to be assessed.

The onsite cost of hosting the data in the database also needs to be checked.

Our recommendation would be to load the data into a desktop tool like Power BI or a desktop database like SQL Server, and then sync with the mobile app.

 

 

 


Can event-based activities be used with the XML export? As far as we know, this cannot be done with embedded TDL code.


Contact us for a demo.


Name: Apoorv Chaturvedi

Email: support@mndatasolutions.com; support@turbodatatool.com

Phone: +91-8802466356

YouTube channel: https://www.youtube.com/@mndatasolutions2042

 




Tuesday, 14 January 2025

Reducing the time for mobile refresh from Tally: a discussion of approaches

Problem statement: Tally is a hierarchical database with a desktop application. In order to refresh mobile apps, one needs to sync the data with a NoSQL or relational database. The issue is how to do the incremental refresh.


Using the standard Tally XML export: Tally can export any report in XML format. For example, the Daybook can be exported using the following XML code.

The given XML request assumes that Tally is being used as a web server.


<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Data</TYPE>
  <ID>Daybook</ID>
 </HEADER>
 <BODY>
  <DESC>
   <STATICVARIABLES>
    <EXPLODEFLAG>Yes</EXPLODEFLAG>
    <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
   </STATICVARIABLES>
  </DESC>
 </BODY>
</ENVELOPE>



The output gives a complete list of masters and transactions.
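To actually run this request, a minimal Python sketch could look as follows. Port 9000 is Tally's usual default HTTP port, but it is configurable, and the file names here are assumptions:

import requests

# Send the Daybook envelope shown above to Tally's HTTP server.
with open("daybook_request.xml", "rb") as f:
    payload = f.read()

response = requests.post("http://localhost:9000", data=payload)
response.raise_for_status()

# Persist the reply: a complete list of masters and transactions.
with open("daybook_response.xml", "wb") as f:
    f.write(response.content)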

However, there could be a need to get only transactions in the load. In such a scenario, the following sample collection export could be used.

<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Collection</TYPE>
  <ID>salesVouchers</ID>
 </HEADER>
 <BODY>
  <DESC>
   <TDL>
    <TDLMESSAGE>
     <COLLECTION NAME="salesVouchers" ISMODIFY="No">
      <TYPE>Voucher</TYPE>
      <FETCH>*.*</FETCH>
      <COMPUTE>FOREXTRANSACTION_rohit:$$FOREXVALUE:$AMOUNT</COMPUTE>
      <COMPUTE>Ledgeramount:$$COLLECTIONFIELD:$AMOUNT:1:ALLLEDGERENTRIES</COMPUTE>
      <COMPUTE>HsnIgstRate_ledger:If $$IsEmpty:$.GSTOVRDNRate Then $$GetInfoFromHierarchy:$ledgername:"ledger":"GSTRate":$date:"GST":0:@@IGSTDutyHead Else $.GSTOVRDNRate</COMPUTE>
      <SYSTEM Type="Formulae" NAME="opfilter">$$ISSALES:$VOUCHERTYPENAME</SYSTEM>
     </COLLECTION>
    </TDLMESSAGE>
   </TDL>
  </DESC>
 </BODY>
</ENVELOPE>

In this scenario only the sales vouchers are extracted, based on the filter parameters for the vouchers. No master details are extracted.
However, a complete extraction of the sales vouchers could put a heavy load on the transaction system. In such a scenario it is best to extract the vouchers on a day-by-day basis, beginning with the DSPFromdate and ending with the DSPTodate. The data from each voucher entry is then stored in the database, as in the sketch below.
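A minimal sketch of that day-by-day pull, assuming the Daybook report and the SVFROMDATE/SVTODATE static variables; the date range, port and file names are illustrative:

from datetime import date, timedelta
import requests

ENVELOPE = """<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Data</TYPE>
  <ID>Daybook</ID>
 </HEADER>
 <BODY>
  <DESC>
   <STATICVARIABLES>
    <SVFROMDATE>{d}</SVFROMDATE>
    <SVTODATE>{d}</SVTODATE>
    <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
   </STATICVARIABLES>
  </DESC>
 </BODY>
</ENVELOPE>"""

day, end = date(2024, 4, 1), date(2024, 4, 30)
while day <= end:
    xml = ENVELOPE.format(d=day.strftime("%d-%b-%Y"))
    response = requests.post("http://localhost:9000", data=xml.encode())
    # One small response per day keeps the load on Tally low.
    with open(f"vouchers_{day.isoformat()}.xml", "wb") as f:
        f.write(response.content)
    day += timedelta(days=1)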

Finding whether a voucher is an insert, update or delete: this can be done through the REMOTEID field in the XML export, for example:
VOUCHER REMOTEID="13excel".
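One way to turn this into change detection, sketched below: keep the last-seen (REMOTEID, ALTERID) pairs between syncs and diff them. The assumption that each exported voucher carries a REMOTEID attribute and an ALTERID field follows the export above; the function and dictionary names are illustrative:

import xml.etree.ElementTree as ET

def classify_changes(prev: dict, response_xml: str) -> dict:
    """prev maps REMOTEID -> ALTERID from the previous sync."""
    current = {}
    for v in ET.fromstring(response_xml).iter("VOUCHER"):
        current[v.get("REMOTEID")] = v.findtext("ALTERID")
    return {
        "insert": [r for r in current if r not in prev],
        "update": [r for r in current if r in prev and current[r] != prev[r]],
        "delete": [r for r in prev if r not in current],
    }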

Note: all XML exports work only for the company currently open in Tally. If there are multiple companies, the XML export has to be done for each company individually.

Heavy load on the system: if the load on the system is heavy, one can extract the vouchers on a date-by-date basis using the SVFROMDATE and SVTODATE parameters.



<ENVELOPE>
 <HEADER>
  <VERSION>1</VERSION>
  <TALLYREQUEST>Export</TALLYREQUEST>
  <TYPE>Collection</TYPE>
  <ID>VOUCHERINCREMENTAL</ID>
 </HEADER>
 <BODY>
  <DESC>
   <TDL>
    <TDLMESSAGE>
     <COLLECTION NAME="VOUCHERINCREMENTAL" ISMODIFY="No" ISFIXED="No" ISINITIALIZE="No" ISOPTION="No" ISINTERNAL="No">
      <TYPE>VOUCHER</TYPE>
      <NATIVEMETHOD>*.*</NATIVEMETHOD>
      <PARMVAR>svfromdate: Date:$$Date:@@machinedate-300</PARMVAR>
      <PARMVAR>svtodate:DATE: $$Date:@@machinedate-200</PARMVAR>
     </COLLECTION>
    </TDLMESSAGE>
   </TDL>
  </DESC>
 </BODY>
</ENVELOPE>


In the above TDL code, the PARMVAR entries for the from-date and to-date can be set to capture the voucher data incrementally.

Masters incremental extraction via XML: for masters there is little option in terms of incremental extraction using XML. One needs to sync all the masters in one go, which can take time.


New approach: for the new approach, the M&N BI developers looked at the following parameters for the initial and incremental extraction:

a.) Unlimited companies: we wanted to use loopcollobj to extract the data from an unlimited number of companies.
b.) A lighter data extract: we decided to use JSON (used in a standard fashion for API extracts).
c.) A more complete extract than those done by other SaaS providers.
d.) A lower load on the system.
e.) A faster extract for the transactions and the masters: we decided to extract the data on a real-time basis (as the voucher is entered into the system), using the event-based activities within Tally, specifically On: Form Accept and On: Form Delete.
f.) Updating the masters as a voucher is entered or the masters are updated/inserted.
g.) Reducing the data load transferred from the system to the cloud.


For this, we developed custom collections to extract data from unlimited companies using the Tally function loopcollobj.

Reducing the load on the Tally system while extracting the voucher data: in order to reduce the load on Tally during the voucher extract, the team took the following actions:

a.) Broke the voucher into child components: the voucher header is extracted on its own, and the voucher ledger details are extracted separately in the same way. (The sample extracts shown in the original post are not reproduced here.)

After the extracts were written to JSON files, the details were joined back together using SQL joins on the composite keys companyname, masterid and so forth, as in the sketch below.
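A minimal sketch of that re-join step, assuming two of the child extracts land as JSON arrays with companyname and masterid on each record; all file, table and column names here are illustrative:

import json, sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE voucher_header (companyname TEXT, masterid TEXT, vouchertypename TEXT, date TEXT)")
con.execute("CREATE TABLE voucher_ledger (companyname TEXT, masterid TEXT, ledgername TEXT, amount REAL)")

# Load the child extracts produced by the JSON export.
with open("voucher_header.json") as f:
    con.executemany("INSERT INTO voucher_header VALUES (?,?,?,?)",
                    [(r["companyname"], r["masterid"], r["vouchertypename"], r["date"])
                     for r in json.load(f)])
with open("voucher_ledger.json") as f:
    con.executemany("INSERT INTO voucher_ledger VALUES (?,?,?,?)",
                    [(r["companyname"], r["masterid"], r["ledgername"], r["amount"])
                     for r in json.load(f)])

# Stitch the voucher back together on the composite key.
rows = con.execute("""
    SELECT h.companyname, h.masterid, h.vouchertypename, h.date,
           l.ledgername, l.amount
    FROM voucher_header h
    JOIN voucher_ledger l
      ON l.companyname = h.companyname
     AND l.masterid    = h.masterid
""").fetchall()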


b.) Converting the hierarchical functions to SQL joins: the SQL joins are at least 70% faster than the hierarchical functions and consume at least 50% less memory. The same concept was used at the database level and in the Power BI module to speed up the querying process.
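As one illustration of the idea (not the author's exact code), a parent walk such as $$GetInfoFromHierarchy can be replaced by a set-based recursive join once the group table sits in a database. A sketch with an assumed groups(name, parent) table:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE groups (name TEXT PRIMARY KEY, parent TEXT);
INSERT INTO groups VALUES
  ('Sundry Debtors', 'Current Assets'),
  ('Current Assets', 'Assets'),
  ('Assets', NULL);
""")

# Flatten every group to all of its ancestors in one set-based pass,
# instead of walking the hierarchy row by row per lookup.
for row in con.execute("""
WITH RECURSIVE chain(name, ancestor, depth) AS (
    SELECT name, parent, 1 FROM groups WHERE parent IS NOT NULL
    UNION ALL
    SELECT c.name, g.parent, c.depth + 1
    FROM chain c JOIN groups g ON g.name = c.ancestor
    WHERE g.parent IS NOT NULL
)
SELECT name, ancestor, depth FROM chain ORDER BY name, depth
"""):
    print(row)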



Reducing the time to extract: our team developed a data warehouse and captured the incremental data load using the Form Accept events for masters and vouchers in Tally. This further speeds up the extraction process.


Sample code:

For example, for the masters the incremental data load logic could be as follows:

[Collection: Group_Crash]

Type : Group
Filter : MASTER_FILTER_Groupfilter
Fetch : Parent,ParentHierarchy,Masterid,Alterid,Isrevenue,Affectsstock,AffectsGrossprofit,Reservedname,Reservename+
,Issubledger,Istdsapplicable,Istcsapplicable,isbillwiseone,iscoscentreson,Istdsexpense,istcsapplicable,Isdeemedpositive
Compute : Area : $_Area
Compute : PrimaryGroup : $_PrimaryGroup
Compute : CompanyName : ##SVCurrentCompany
Compute : Groupname : $Name

;select $Name,$Parent,$Area,$ParentHierarchy,$PrimaryGroup,$Masterid,$Alterid from Group_crash

[#SYSTEM: FORMULAE]

MASTER_FILTER_Groupfilter : $$String:$Name=$$String:#MSTNAME

Here #MSTNAME is the name associated with the group name in the Group form.



Similarly for the voucher:

[#Form: VOUCHER]

On : Form Accept : Yes : FORM ACCEPT
On : Form Accept : Yes : Call : MasterFunction_Transaction

The voucher collection would include a statement like:

[Collection: VoucherDetails]

Type : Voucher
;Parm Var: svfromdate: Date:$$Date:@@machinedate-30
;Parm Var: svtodate:DATE: $$Date:@@machinedate
Filter : MASTER_FILTER_VOUCHER

where MASTER_FILTER_VOUCHER is as follows:

[#SYSTEM: FORMULAE]

MASTER_FILTER_VOUCHER : $Date=#plainvchdate and $$String:$Vouchernumber=$$String:#VCHNUMBER







Reducing the data load transferred from the system to the cloud: this was done through the incremental data load and through the data compression achieved by the Power BI module. If a SQL Server instance is installed, the speed is further improved by serving the API request from a single table.



The above approaches can be considered for the initial and incremental data load from Tally to the cloud.


Name: Apoorv Chaturvedi

Phone: +91-8802466356

Email: support@turbodatatool.com; support@mndatasolutions.com

YouTube channel: https://www.youtube.com/@mndatasolutions2042

Indiamart page: https://www.indiamart.com/mnbusinessintelligenceindia/









Monday, 13 January 2025

Service level agreements for the Tally to Power BI module and the Tally to SQL Server reporting module

Problem statement: Many firms use the top-down approach in Tally for report generation, and face bottlenecks with regard to complete report generation.

For example, for collections the companies may need to look at debtor closing balances beyond those under 'Sundry Debtors'.


For example, take the case of a sample company: the receivables sit under 'Sundry Debtors', but receivables can be in other groups also. (The screenshots from the original post are not reproduced here.)

Many customers desire a more comprehensive collections process based on a bottom-up approach. This is what our JSON extract from Tally supports.

Attached are the service level agreements that we can sign:
a.) Unlimited companies extract.
b.) Report refresh time of less than 5 seconds.
c.) Complete trial balance extract down to any level of ledgers.
d.) No parameterization of reports required for ledgervouchers, stockitemvouchers and costcentrevouchers.
e.) Complete ledger and group level hierarchy flattening up to 10 levels.
f.) If the SQL Server database is used, then with the incremental data load the extract time can be reduced to less than 10 seconds (on a per-voucher extract).
g.) No load on Tally while the reports are being refreshed.
h.) Minimal tables for analysis: in our core module (without the cost centre) we propose to reduce the number of table extracts to one.

  • The single extract shall include voucher ledger, inventory movements, item hierarchy and ledger hierarchy movements in the Tally Power BI module.
  • The single table extract shall include voucher ledger, voucher item, batch, bill history, item hierarchy and ledger hierarchy movements in the Tally SQL Server module.

 



Note about inventory valuations: we shall offer inventory valuations based only on weighted average valuations. The entries shall include only isoptional:no, isvoid:no, iscancelled:no, isdeleted:no, isinvoice:yes and purchaseaccounts:yes. Any direct expense entries associated with the isinvoiced entries should be in the same voucher.
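For reference, the weighted average rate is simply the total purchase value divided by the total purchase quantity. A toy illustration with made-up figures:

# Weighted average valuation: rate = total value / total quantity.
purchases = [(100, 10.0), (50, 12.0)]           # (quantity, rate) pairs
total_qty = sum(q for q, _ in purchases)        # 150
total_value = sum(q * r for q, r in purchases)  # 1000 + 600 = 1600
weighted_avg_rate = total_value / total_qty     # 1600 / 150 = 10.67 (approx.)
print(round(weighted_avg_rate, 2))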


A separate note on the cost centre module:
The cost centre module is separate from the core single-table module, because there are a lot of complications with regard to cost centres (specifically with regard to inventory movements). We do, however, extract the ledger and inventory movements associated with the cost centre movements.

Contact us for a free demo of the Tally extraction methods using XML and ODBC. You shall get a sample Tally to Excel extract for XML.

Name: Apoorv Chaturvedi
email: support@turbodatatool.com;support@mndatasolutions.com
Phone:+91-8802466356
Youtube channel:https://www.youtube.com/@mndatasolutions2042

 




Tuesday, 7 January 2025

Converting Python code to set-based SQL logic

Problem statement: In Python one often uses cursor-based logic to process data on a row-by-row basis. Querying large data sets this way results in very slow queries.


A sample query statement within Python (shown as a screenshot in the original post) processed the data on a row-by-row basis. This consumes a lot of RAM (random access memory).
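Since the original screenshot is not reproduced here, the following sketch shows the pattern with an assumed sales table: the cursor loop pulls every row into Python and writes back one row at a time, while the single set-based statement does the same work inside the database engine:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (id INTEGER PRIMARY KEY, qty INTEGER, rate REAL, amount REAL);
INSERT INTO sales (qty, rate) VALUES (2, 10.0), (3, 5.0), (1, 7.5);
""")

# Row-by-row (slow): one round trip and one UPDATE per row.
for row_id, qty, rate in con.execute("SELECT id, qty, rate FROM sales").fetchall():
    con.execute("UPDATE sales SET amount = ? WHERE id = ?", (qty * rate, row_id))

# Set-based (fast): one statement over the whole table.
con.execute("UPDATE sales SET amount = qty * rate")

print(con.execute("SELECT id, qty, rate, amount FROM sales").fetchall())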

We propose to change the processing to set-based queries using SQL joins.

The proposed benefits:

a.) The parameter-based reports can be changed to list reports using our methodology.

b.) The report reload and report refresh times should be reduced by at least 80%.

Further benefits:

a.) With the bottom-up approach, one can design one's own KPIs and metrics.

b.) One gets a better audit view, to understand the financial metrics in a better fashion.



Prerequisites:

One should be able to handle large datasets with ease, since one is adopting the bottom-up approach. One also needs to code the on-the-fly business logic of the Python layer at the database level.

Contact Information:


Apoorv, M&N BI


Phone: +91-8802466356


Email: support@turbodatatool.com; support@mndatasolutions.com


YouTube: https://www.youtube.com/@mndatasolutions2042


Blog: mndatasolutionsindia.blogspot.com








Monday, 30 December 2024

Unlock Actionable Insights with Bottom-Up Analytics – A Personalized Demo Offer

 Most businesses rely on readymade reports from various ERPs, which often follow a top-down approach. While this method starts with aggregates and drills down to transactions, it can overlook crucial details and limit flexibility in analytics. At Turbodata, we take a different path: a bottom-up approach. By capturing raw transactional data from multiple systems and rolling it up using advanced business logic, we provide insights that are deeper, clearer, and more actionable.


Top-Down vs. Bottom-Up: A Road Trip Analogy

  • Top-Down: Imagine planning a road trip by estimating your fuel needs based solely on the destination. This might ignore critical factors like terrain or mileage.
  • Bottom-Up: Our approach calculates fuel needs by factoring in distance, terrain, and car efficiency—ensuring a more precise and reliable journey.

Similarly, the bottom-up approach enables:

  • Flexibility: Generate aggregates tailored to your analytical needs.
  • Integration: Seamless alignment with multiple software systems due to raw data availability.
  • Custom Allocations: Allocate percentages dynamically based on group-level hierarchies.
  • Pattern Recognition: Identify trends and patterns that are invisible in top-down views.
  • Efficiency Gains: Provide a clearer and more accurate picture to optimize operations.

Common Business Challenges Addressed

Here’s how our approach addresses real-world problems faced by businesses:

  1. Where has the cash gone?

    • By analyzing ledgers, party ledgers, groups, and time views, we uncover hidden cash flow patterns.
  2. Why is sales stagnating?

    • Our system evaluates item, party, and time views simultaneously to pinpoint bottlenecks.
  3. Sales is increasing, but gross profits aren’t improving.

    • We analyze item, party, group, stock group, and time views to understand the root cause.

Let Us Demonstrate the Difference

To illustrate the power of our bottom-up analytics, we’d love the opportunity to conduct a personalized demo using a sample ERP system. This session will showcase how we can:

  • Integrate seamlessly with your existing systems.
  • Provide detailed insights that drive smarter decisions.
  • Enable flexibility and scalability in your reporting processes.

Please feel free to reach out to schedule a demo or ask any questions.

Contact Information:

We’re excited about the opportunity to collaborate and deliver actionable insights tailored to your business.

Warm regards,
Apoorv Chaturvedi
Team Turbodata



Monday, 23 December 2024

Why a data warehouse over a transaction system

Query from the customer:

Why should I keep a data warehouse when I already have a transaction system?

The reasons are as follows:

a.) Consolidation requirements: one could have multiple data sources from which reports need to be developed, including sources outside the core ERP. Here the data warehouse helps.

b.) Faster, seamless reporting: if the reports involve a high volume of data, they can put a lot of load on the transaction system. In such a case it is best to run the queries on a separate copy of the transaction data.

c.) Minimal load on the transaction system (the issue of on-the-fly calculations): many source systems use parameterized reports for generating their output, whereas consolidated analytics can require the complete details. For example, the complete ledger movements in Tally are in the Ledger Vouchers report, and to get the output for each ledger one has to send the ledger name as a parameter to that report. If there are 500 ledgers, the Ledger Vouchers report would be hit 500 times; ideally one would hit it only once. For this, the business logic of the software needs to be coded in the data warehouse (see the sketch after this list).


d.) An increase in the number of users accessing the reports: often the reports need to be seen by a large number of end users. One cannot serve them from the transaction system, as that can lock it up; this is better done from a data warehouse.


e.) I am already using a cloud-based viewing platform, so why should I use a data warehouse? Even if one is using a cloud-based data warehouse (Google BigQuery and other technologies), the following are scenarios for using an on-premise data warehouse:

  • Need for security
  • Need to consolidate data from multiple data sources
  • Need to give a greater number of users access to information: the users could be internal and external
  • Need for developing customized mobile apps and report applications
  • Lower and more predictable costs: one does not have to pay on a per-user basis

What are the cloud-based data warehouses? Examples include Snowflake, Google BigQuery and many others.
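To make the contrast in point c.) concrete, here is a small sketch; the table and column names are assumptions. Once the ledger movements sit in one warehouse table, the 500 parameterized report hits collapse into a single set-based query:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ledger_movements (ledgername TEXT, amount REAL);
INSERT INTO ledger_movements VALUES
  ('Sales', -100.0), ('Sales', -50.0), ('Freight Inward', 20.0);
""")

# One GROUP BY instead of one parameterized report hit per ledger.
for name, movement in con.execute(
        "SELECT ledgername, SUM(amount) FROM ledger_movements GROUP BY ledgername"):
    print(name, movement)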

When should I use a data warehouse?

If one has more than 15 power users for reports, then one should start thinking of using a data warehouse.


What are the cons of using an on-site data warehouse?

  • Certain fixed costs are involved
  • Trained manpower, including IT personnel, might be required


ROI of a data warehouse? This is slightly tricky. The ROI needs to be calculated from:

a.) The time saved in consolidating spreadsheets

b.) The reduction in manual errors in spreadsheets

c.) Enhanced operational efficiency (this needs to be calculated separately)

d.) The enhanced benefit of getting the right information to the right people quickly.



For more on the benefits of using a data warehouse, contact:

Name: Apoorv Chaturvedi

Email: support@turbodatatool.com; support@mndatasolutions.com

Indiamart page: https://www.indiamart.com/mnbusinessintelligenceindia/

 


Difference between a data warehouse and a transactional system

Many personnel believe that a data warehouse is simply a copy of the transactional system. However, there are some key differences between the two.

a.) A data warehouse can be used for consolidation of data across multiple data sources. A data warehouse can have two layers: an ODS (operational data store) and a star schema layer (a denormalization layer that minimizes the joins).




Why is the normalization layer required?

a.) The normalization layer helps consolidate the data from multiple data sources in a single database layer.

b.) It is used for incremental data loads: one does not need to do a full load after the normalization layer, since inserts, updates and deletes suffice.

c.) The on-the-fly calculations of the source software are done in the normalized layer.



Thus the ODS layer is different from the transaction layer.

Why not simply use a copy of the transaction system?

a.) The transaction system copy will have only the base data.

b.) The on-the-fly calculations of the software will not be there in the copy of the transaction system; the business logic would need to be worked out.


The purpose of the star schema

The purpose of the star schema is to reduce the joins: the fewer the joins, the faster the query processing.

What about the snowflake schema? The snowflake schema adds a normalization layer to the star schema; it is used when the dimension tables are very large.
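A small sketch of the trade-off, with assumed table names: in the star form the item dimension already carries its group, so a report needs one join; in the snowflaked form the group is split into its own table, adding a second join:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Star: the denormalized dimension carries the group directly.
CREATE TABLE dim_item_star (item_id INT, itemname TEXT, stockgroup TEXT);
-- Snowflake: the group is normalized into a separate table.
CREATE TABLE dim_item (item_id INT, itemname TEXT, group_id INT);
CREATE TABLE dim_group (group_id INT, stockgroup TEXT);
CREATE TABLE fact_sales (item_id INT, amount REAL);
""")

# Star schema query: one join between fact and dimension.
star = """SELECT d.stockgroup, SUM(f.amount)
          FROM fact_sales f JOIN dim_item_star d ON d.item_id = f.item_id
          GROUP BY d.stockgroup"""

# Snowflake query: the extra normalization costs a second join.
snow = """SELECT g.stockgroup, SUM(f.amount)
          FROM fact_sales f
          JOIN dim_item  d ON d.item_id  = f.item_id
          JOIN dim_group g ON g.group_id = d.group_id
          GROUP BY g.stockgroup"""

print(con.execute(star).fetchall(), con.execute(snow).fetchall())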


Where do we start?

Understand what a data warehouse is and see if it fits into your scheme of things.


Name: Apoorv Chaturvedi

email: support@mndatasolutions.com;support@turbodatatool.com

Phone:+91-8802466356




Incomplete collections-Tally (a different approach)

 Problem statement: a number of customers still have cash flow issues after taking the following actions: a.) Usage of Tally b.) Usage of th...