Tuesday, 14 January 2025

Reducing the time for mobile refresh from Tally - a discussion of approaches

Problem statement: Tally is a hierarchical database with a desktop application. In order to refresh the mobile apps, one needs to sync the data with a NoSQL or relational database. The issue is how to do the incremental refresh.


Using the standard Tally XML export: Tally can export any report in XML format. For example, the Daybook can be exported using the following XML request.

The given XML request assumes that Tally is running as a web server.


<ENVELOPE>
  <HEADER>
    <VERSION>1</VERSION>
    <TALLYREQUEST>Export</TALLYREQUEST>
    <TYPE>Data</TYPE>
    <ID>Daybook</ID>
  </HEADER>
  <BODY>
    <DESC>
      <STATICVARIABLES>
        <EXPLODEFLAG>Yes</EXPLODEFLAG>
        <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
      </STATICVARIABLES>
    </DESC>
  </BODY>
</ENVELOPE>



The output of this request gives a complete list of masters and transactions.
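
As a minimal sketch of how such a request can be issued from outside Tally (assuming Tally's web server is enabled on its default port 9000; adjust the URL to your installation), the envelope can be posted over HTTP, for example from Python:

# Minimal sketch: post the Daybook export envelope to Tally's HTTP server.
# Assumes Tally is running with its web server enabled on localhost:9000
# (adjust TALLY_URL to your setup).
import requests

TALLY_URL = "http://localhost:9000"

daybook_request = """<ENVELOPE>
  <HEADER>
    <VERSION>1</VERSION>
    <TALLYREQUEST>Export</TALLYREQUEST>
    <TYPE>Data</TYPE>
    <ID>Daybook</ID>
  </HEADER>
  <BODY>
    <DESC>
      <STATICVARIABLES>
        <EXPLODEFLAG>Yes</EXPLODEFLAG>
        <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
      </STATICVARIABLES>
    </DESC>
  </BODY>
</ENVELOPE>"""

response = requests.post(TALLY_URL, data=daybook_request.encode("utf-8"))
response.raise_for_status()
print(response.text[:500])  # the exported XML containing masters and transactions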

However, there could be a need to get only transactions in the load. In such a scenario, the following sample collection export could be used.

<ENVELOPE>
  <HEADER>
    <VERSION>1</VERSION>
    <TALLYREQUEST>Export</TALLYREQUEST>
    <TYPE>Collection</TYPE>
    <ID>salesVouchers</ID>
  </HEADER>
  <BODY>
    <DESC>
      <TDL>
        <TDLMESSAGE>
          <COLLECTION NAME="salesVouchers" IsModify="No">
            <TYPE>Voucher</TYPE>
            <FETCH>*.*</FETCH>
            <COMPUTE>FOREXTRANSACTION_rohit:$$FOREXVALUE:$AMOUNT</COMPUTE>
            <COMPUTE>Ledgeramount:$$COLLECTIONFIELD:$AMOUNT:1:ALLLEDGERENTRIES</COMPUTE>
            <COMPUTE>HsnIgstRate_ledger:If $$IsEmpty:$.GSTOVRDNRate Then $$GetInfoFromHierarchy:$ledgername:"ledger":"GSTRate":$date:"GST":0:@@IGSTDutyHead Else $.GSTOVRDNRate</COMPUTE>
            <SYSTEM Type="Formulae" NAME="opfilter">$$ISSALES:$VOUCHERTYPENAME</SYSTEM>
          </COLLECTION>
        </TDLMESSAGE>
      </TDL>
    </DESC>
  </BODY>
</ENVELOPE>

In this scenario, only the sales vouchers are extracted, based on the filter parameters for the vouchers. No master details are extracted.
However, a complete extraction of the sales vouchers could put a heavy load on the transaction system. In such a scenario it is best to extract the vouchers on a day-by-day basis, beginning with the DSPFromdate and ending with the DSPTodate. The data from each voucher entry is then stored in the database.
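
A rough sketch of such a day-by-day pull is given below. It assumes the dates are passed as the SVFROMDATE and SVTODATE static variables (discussed later in this post), that Tally's web server is reachable on localhost:9000, and that each day's XML response is saved for downstream parsing into the database; the envelope is abbreviated and the file paths are illustrative.

# Sketch: pull vouchers one day at a time and store each day's XML for parsing
# into the database. Assumes the target collection honours the SVFROMDATE and
# SVTODATE static variables; the envelope below is abbreviated and should carry
# the same <ID> and TDL definition as the collection request shown above.
import datetime
import pathlib
import requests

TALLY_URL = "http://localhost:9000"

ENVELOPE_TEMPLATE = """<ENVELOPE>
  <HEADER>
    <VERSION>1</VERSION>
    <TALLYREQUEST>Export</TALLYREQUEST>
    <TYPE>Collection</TYPE>
    <ID>salesVouchers</ID>
  </HEADER>
  <BODY>
    <DESC>
      <STATICVARIABLES>
        <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>
        <SVFROMDATE TYPE="Date">{day}</SVFROMDATE>
        <SVTODATE TYPE="Date">{day}</SVTODATE>
      </STATICVARIABLES>
    </DESC>
  </BODY>
</ENVELOPE>"""

def fetch_vouchers_for_day(day):
    """Fetch one day's vouchers as raw XML (Tally expects dates as YYYYMMDD)."""
    body = ENVELOPE_TEMPLATE.format(day=day.strftime("%Y%m%d"))
    response = requests.post(TALLY_URL, data=body.encode("utf-8"))
    response.raise_for_status()
    return response.text

out_dir = pathlib.Path("voucher_extracts")
out_dir.mkdir(exist_ok=True)
day, end = datetime.date(2024, 4, 1), datetime.date(2024, 4, 30)
while day <= end:
    (out_dir / f"vouchers_{day:%Y%m%d}.xml").write_text(fetch_vouchers_for_day(day), encoding="utf-8")
    day += datetime.timedelta(days=1)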

Finding whether a voucher is an insert, update or delete: this can be done through the REMOTEID field in the XML export, which appears on the voucher tag as, for example:
VOUCHER REMOTEID="13excel".
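
As a minimal sketch of how the REMOTEID can drive the sync logic (assumptions: the mobile-side store is a SQLite table keyed on remoteid, and Tally's ALTERID counter is used as the change marker for updates):

# Minimal sketch: classify vouchers as insert/update/delete by comparing the
# REMOTEID (and, as an assumption, the ALTERID change counter) from the latest
# extract against a local SQLite store used by the mobile sync.
import sqlite3

conn = sqlite3.connect("mobile_sync.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS voucher (remoteid TEXT PRIMARY KEY, alterid INTEGER, payload TEXT)"
)

def classify(extracted):
    """extracted maps REMOTEID -> (ALTERID, raw voucher XML) from the latest pull."""
    stored = dict(conn.execute("SELECT remoteid, alterid FROM voucher"))
    inserts = [rid for rid in extracted if rid not in stored]
    deletes = [rid for rid in stored if rid not in extracted]
    updates = [rid for rid, (alterid, _) in extracted.items()
               if rid in stored and stored[rid] != alterid]
    return inserts, updates, deletes

# Illustrative usage with one extracted voucher.
sample = {"13excel": (105, '<VOUCHER REMOTEID="13excel">...</VOUCHER>')}
print(classify(sample))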

Note: all XML exports work only for the current company that is open in Tally. If there are multiple companies, then the XML export needs to be done for each company individually.

Heavy load on the system: in case the load on the system is heavy, one could extract the vouchers on a date-by-date basis using the SVFROMDATE and SVTODATE parameters.



<ENVELOPE>
  <HEADER>
    <VERSION>1</VERSION>
    <TALLYREQUEST>Export</TALLYREQUEST>
    <TYPE>Collection</TYPE>
    <ID>VOUCHERINCREMENTAL</ID>
  </HEADER>
  <BODY>
    <DESC>
      <TDL>
        <TDLMESSAGE>
          <COLLECTION NAME="VOUCHERINCREMENTAL" ISMODIFY="No" ISFIXED="No" ISINITIALIZE="No" ISOPTION="No" ISINTERNAL="No">
            <TYPE>VOUCHER</TYPE>
            <NATIVEMETHOD>*.*</NATIVEMETHOD>
            <PARMVAR>svfromdate: Date: $$Date:@@machinedate-300</PARMVAR>
            <PARMVAR>svtodate: Date: $$Date:@@machinedate-200</PARMVAR>
          </COLLECTION>
        </TDLMESSAGE>
      </TDL>
    </DESC>
  </BODY>
</ENVELOPE>


In the above TDL code, the PARMVAR values for the from date and the to date can be set to capture the voucher data incrementally.

Masters incremental extraction via XML: in the case of masters there is little scope for incremental extraction using XML. One needs to sync all the masters in one go, which can take time.


New approach: for the new approach, the M&N BI developers looked at the following parameters for initial and incremental extraction.

a.) Unlimited companies: we desired to use loopcollobj to extract the data from unlimited companies.
b.) Lighter data extract: we decided to use JSON (used in a standard fashion for API extracts).
c.) A more complete extract than done by other SaaS providers.
d.) Lower load on the system.
e.) Faster extract for the transactions and the masters: we decided to extract the data on a real-time basis (as the voucher is entered into the system). We desired to use the event-based activities within Tally, specifically On Form Accept and On Form Delete.
f.) Updating the masters as the voucher is entered or the masters are updated/inserted.
g.) Reducing the data load transferred from the system to the cloud.


For this, we developed custom collections to extract data from unlimited companies using the Tally function loopcollobj.

Reducing the load on the Tally system while extracting the voucher data: in order to reduce the load on Tally during the voucher extract, the team took the following actions:
a.) Broke the voucher into child components.

For example, the voucher header is extracted as one component, and the voucher ledger details are extracted as a separate component.
After the extract was done into JSON files, the details were joined using SQL joins on the composite keys such as companyname and masterid.
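
As a sketch of this joining step (the rows shown inline stand in for the actual JSON child extracts, which in practice would be read with pd.read_json), the child components can be combined on the composite key, for example with pandas:

# Sketch: join the voucher child extracts on the composite key
# (companyname, masterid). Inline rows stand in for the JSON extract files.
import pandas as pd

voucher_header = pd.DataFrame([
    {"companyname": "ABC Ltd", "masterid": 101, "vouchertypename": "Sales", "date": "20240405"},
])
voucher_ledger = pd.DataFrame([
    {"companyname": "ABC Ltd", "masterid": 101, "ledgername": "Customer X", "amount": -1180.0},
    {"companyname": "ABC Ltd", "masterid": 101, "ledgername": "Sales",      "amount": 1000.0},
    {"companyname": "ABC Ltd", "masterid": 101, "ledgername": "IGST",       "amount":  180.0},
])

voucher_flat = voucher_header.merge(
    voucher_ledger,
    on=["companyname", "masterid"],  # composite key from the extract
    how="left",
)
print(voucher_flat)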


b.) Converting the hierarchical functions to SQL joins: the SQL joins are at least 70% faster than the hierarchical functions and consume at least 50% less memory. The same concept was used at the database level and in the Power BI module to speed up the querying process.
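
As an illustration of replacing a hierarchical lookup with joins (the group and ledger tables below are assumptions), the parent group of each ledger can be resolved by a join, and repeated self-joins flatten deeper levels:

# Sketch: resolve the ledger -> group -> parent-group hierarchy with joins
# instead of per-row hierarchical function calls. Each additional level is one
# more join rather than one more lookup per row. Table/column names are assumptions.
import pandas as pd

ledgers = pd.DataFrame({"ledger": ["Cash", "HDFC Bank"],
                        "parent": ["Cash-in-Hand", "Bank Accounts"]})
groups = pd.DataFrame({"group": ["Cash-in-Hand", "Bank Accounts", "Current Assets"],
                       "parent": ["Current Assets", "Current Assets", "Primary"]})

flat = (
    ledgers
    .merge(groups, left_on="parent", right_on="group", suffixes=("", "_l1"))     # level 1
    .merge(groups, left_on="parent_l1", right_on="group", suffixes=("", "_l2"))  # level 2
)
print(flat[["ledger", "parent", "parent_l1", "parent_l2"]])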



Reducing the time to extract: our team developed a datawarehouse and captured the incremental data load using the Form Accept events for masters and vouchers in Tally. This further speeds up the extraction process.


Sample code:

For example, for the masters, the incremental data load logic could be as follows:

[Collection: Group_Crash]

    Type    : Group
    Filter  : MASTER_FILTER_Groupfilter
    Fetch   : Parent, ParentHierarchy, Masterid, Alterid, Isrevenue, Affectsstock, AffectsGrossprofit, Reservedname, Reservename+
              , Issubledger, Istdsapplicable, Istcsapplicable, isbillwiseone, iscoscentreson, Istdsexpense, istcsapplicable, Isdeemedpositive
    Compute : Area : $_Area
    Compute : PrimaryGroup : $_PrimaryGroup
    Compute : CompanyName : ##SVCurrentCompany
    Compute : Groupname : $Name

;select $Name,$Parent,$Area,$ParentHierarchy,$PrimaryGroup,$Masterid,$Alterid from Group_Crash

[#SYSTEM:FORMULAE]

    MASTER_FILTER_Groupfilter: $$String:$Name=$$String:#MSTNAME


Here #MSTNAME is the name field associated with the group name on the Group form.



Similarly for voucher:

[#Form: VOUCHER]

    On : Form Accept : Yes : FORM ACCEPT
    On : Form Accept : Yes : Call : MasterFunction_Transaction


The voucher collection would include a statement like:


[Collection: VoucherDetails]

    Type   : Voucher
    ;Parm Var : svfromdate : Date : $$Date:@@machinedate-30
    ;Parm Var : svtodate   : Date : $$Date:@@machinedate
    Filter : MASTER_FILTER_VOUCHER


where the MASTER_FILTER_VOUCHER is as follows:

[#SYSTEM:FORMULAE]


MASTER_FILTER_VOUCHER: $Date=#plainvchdate and $$String:$Vouchernumber=$$String:#VCHNUMBER







Reducing the data load transferred from the system to the cloud: this was done through the incremental data load and through the data compression achieved by the Power BI module. If a SQL Server instance is installed, then the refresh is further quickened by having the API request return a single table.



The above approaches can be considered for the initial and incremental data load from Tally to the cloud.


Name: Apoorv Chaturvedi

Phone:+91-8802466356

email:support@turbodatatool.com;support@mndatasolutions.com

youtube channel:@mndatasolutions2042.

Indiamart place:https://www.indiamart.com/mnbusinessintelligenceindia/









Monday, 13 January 2025

Service level agreements for Tally to Powerbi module and Tally to SQL Server Reporting level module

Problem statement: many firms use the top-down approach in Tally for report generation. The firms face bottlenecks with regard to complete report generation.

For example, for collections, companies could look at debtor closing balances beyond those under 'Sundry Debtors'.


However the receivables can be in other groups also.


Many customers desire a more comprehensive collection process through a bottom-up approach.
This is achieved with our JSON extract from Tally.

Attached are the service level agreements that we can sign:
a.) Unlimited companies extract.
b.) Report refresh time of less than 5 seconds. 
c.) Complete trial balance extract up to any level of ledgers.
d.) No parameterization of reports required for ledgervouchers, stockitemvouchers and costcentrevouchers.
e.) Complete ledger and group level hierarchy flattening up to 10 levels.
f.) If the SQL Server database is used, then with the incremental data load the extract time can be reduced to less than 10 seconds (on a per-voucher extract).
g.) No load on Tally while the reports are being refreshed.
h.) Minimal tables for analysis: in our core module without the cost centre we propose to reduce the number of table extracts to one. 

  • The single extract shall include voucher ledger, inventory movements, item hierarchy and ledger hierarchy movements in the Tally Power BI module.
  • The single table extract shall include voucher ledger, voucher item, batch, bill history, item hierarchy and ledger hierarchy movements in the Tally SQL Server module.

 



Note about inventory valuations: we shall offer inventory valuations based only on weighted average valuation. The entries included shall be only those with isoptional:no, isvoid:no, iscancelled:no, isdeleted:no, isinvoice:yes and purchaseaccounts:yes. Any direct expense entries associated with the invoiced entries should be in the same voucher.


A separate note on the cost centre module
The cost centre module is separate from the core single-table module. This is because there are a lot of complications with regard to cost centres (specifically with regard to inventory movements). We are, however, extracting ledger and inventory movements with regard to the cost centre movements.

Contact us for a free demo of the Tally extraction methods using XML and ODBC. You shall get a sample Tally-to-Excel extract for XML.

Name: Apoorv Chaturvedi
email: support@turbodatatool.com;support@mndatasolutions.com
Phone:+91-8802466356
Youtube channel:https://www.youtube.com/@mndatasolutions2042

 




Tuesday, 7 January 2025

Converting python code to set based sql logic

Problem statement: in the case of Python, one uses cursor-based logic to query data on a row-by-row basis. Querying large data sets this way results in very slow queries.


A typical query statement within Python fetches rows through a cursor and processes them one at a time.

In this particular case the processing takes place on a row-by-row basis, which consumes a lot of RAM (random access memory).

We propose to change the processing to set-based queries using SQL joins.
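
A simplified illustration of the change is given below (the table names, columns and in-memory SQLite database are assumptions for the example): the first function walks a cursor row by row, the second pushes the same logic into a single set-based join.

# Illustration: row-by-row cursor processing versus one set-based join.
# The ledger/voucher_ledger tables and sample rows are assumptions for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ledger (name TEXT PRIMARY KEY);
    CREATE TABLE voucher_ledger (ledgername TEXT, amount REAL);
    INSERT INTO ledger VALUES ('Cash'), ('Sales');
    INSERT INTO voucher_ledger VALUES ('Cash', 100), ('Cash', 250), ('Sales', -350);
""")

def totals_row_by_row():
    """Cursor-based: one query per ledger, aggregated in Python."""
    totals = {}
    for (ledger,) in conn.execute("SELECT name FROM ledger"):
        cur = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM voucher_ledger WHERE ledgername = ?",
            (ledger,),
        )
        totals[ledger] = cur.fetchone()[0]
    return totals

def totals_set_based():
    """Set-based: a single join plus aggregation executed inside the database."""
    sql = """
        SELECT l.name, COALESCE(SUM(v.amount), 0) AS total
        FROM ledger l
        LEFT JOIN voucher_ledger v ON v.ledgername = l.name
        GROUP BY l.name
    """
    return dict(conn.execute(sql).fetchall())

print(totals_row_by_row())
print(totals_set_based())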

The proposed benefits:

a.) The parameter based reports can be changed to list reports using our methodology.

b.) The report reload and report refresh times should be reduced by at least 80%.


The benefits are as follows:

a.) With the bottom up approach one can design one's own KPIs and metrics.

b.) One can get better audit view to understand financial metrics in a better fashion.



Prerequisites:

One should be able to handle large datasets with ease, since one is adopting the bottom-up approach. One also needs to code the on-the-fly business logic of the Python layer at the database level.

Contact Information:


Apoorv, M&N BI


Phone: +91-8802466356


Email: support@turbodatatool.com; support@mndatasolutions.com


YouTube: https://www.youtube.com/@mndatasolutions2042


Blog: mndatasolutionsindia.blogspot.com








Monday, 30 December 2024

Unlock Actionable Insights with Bottom-Up Analytics – A Personalized Demo Offer

 Most businesses rely on readymade reports from various ERPs, which often follow a top-down approach. While this method starts with aggregates and drills down to transactions, it can overlook crucial details and limit flexibility in analytics. At Turbodata, we take a different path: a bottom-up approach. By capturing raw transactional data from multiple systems and rolling it up using advanced business logic, we provide insights that are deeper, clearer, and more actionable.


Top-Down vs. Bottom-Up: A Road Trip Analogy

  • Top-Down: Imagine planning a road trip by estimating your fuel needs based solely on the destination. This might ignore critical factors like terrain or mileage.
  • Bottom-Up: Our approach calculates fuel needs by factoring in distance, terrain, and car efficiency—ensuring a more precise and reliable journey.

Similarly, the bottom-up approach enables:

  • Flexibility: Generate aggregates tailored to your analytical needs.
  • Integration: Seamless alignment with multiple software systems due to raw data availability.
  • Custom Allocations: Allocate percentages dynamically based on group-level hierarchies.
  • Pattern Recognition: Identify trends and patterns that are invisible in top-down views.
  • Efficiency Gains: Provide a clearer and more accurate picture to optimize operations.

Common Business Challenges Addressed

Here’s how our approach addresses real-world problems faced by businesses:

  1. Where has the cash gone?

    • By analyzing ledgers, party ledgers, groups, and time views, we uncover hidden cash flow patterns.
  2. Why is sales stagnating?

    • Our system evaluates item, party, and time views simultaneously to pinpoint bottlenecks.
  3. Sales is increasing, but gross profits aren’t improving.

    • We analyze item, party, group, stock group, and time views to understand the root cause.

Let Us Demonstrate the Difference

To illustrate the power of our bottom-up analytics, we’d love the opportunity to conduct a personalized demo using a sample ERP system. This session will showcase how we can:

  • Integrate seamlessly with your existing systems.
  • Provide detailed insights that drive smarter decisions.
  • Enable flexibility and scalability in your reporting processes.

Please feel free to reach out to schedule a demo or ask any questions.

Contact Information:

We’re excited about the opportunity to collaborate and deliver actionable insights tailored to your business.

Warm regards,
Apoorv Chaturvedi
Team Turbodata



Monday, 23 December 2024

Why a Datawarehouse over transaction system

 Query from the customer

Why should I keep a datawarehouse when I already have a transaction system?

The reasons are as follows:

a.) Consolidation requirements: one could have multiple data sources from which reports need to be developed. The data sources could be from outside the core ERP as well. Here the datawarehouse can help.

b.) Faster, seamless reporting: if the reports involve a high volume of transaction data, then they can put a lot of load on the transaction system. In such a case it is best to run the queries from a separate copy of the transaction data.

c.) Minimal load on the transaction system (the issue of on-the-fly calculations): many source systems use parametrized reports for generating the output. For generating consolidated analytics one could require the complete details. For example, complete ledger movements in Tally are available in the ledger vouchers report. To get the output for each ledger, one has to send the ledger name as a parameter to the ledger vouchers report. Say there are 500 ledgers; then we would have to hit the ledger vouchers report 500 times. Ideally one should need to hit the ledger vouchers report only once. For this, coding the business logic of the source software may be required in the datawarehouse.


d.) Increase in the number of users accessing the reports: many times the reports need to be seen by a large number of end users. One cannot do this over the transaction system, as it can lock the transaction system. The same needs to be done on a datawarehouse.


e.) I am already using a cloud-based viewing platform, why should I use a datawarehouse? If one is using a cloud-based datawarehouse (Google BigQuery and other technologies), then the following are the scenarios for using an on-premise datawarehouse:

  • Need for security
  • Need to consolidate data from multiple data sources
  • Need to give a greater number of users access to information: the users could be internal and external
  • Need for developing customized mobile apps and report applications
  • Lower and more predictable costs: one does not have to pay on a per-user basis.

What are the cloud-based datawarehouses? Examples include Snowflake, Google BigQuery and many others.

When should I use a datawarehouse?

If one has more than 15 power users for reports, then one should start thinking of using a datawarehouse.


What are the cons of using an on site datawarehouse?

  • Certain fixed costs are involved.
  • Trained manpower, including IT personnel, might be required.


ROI of a datawarehouse? This is slightly tricky. The ROI needs to be calculated as follows:

a.) Saving of time in the consolidation of sheets.

b.) Reduction of manual errors in spreadsheets.

c.) Enhanced operational efficiency - this needs to be calculated separately.

d.) The enhanced benefit of getting the right information to the right people quickly.



For more benefits of the usage of a datawarehouse, contact:

Name: Apoorv Chaturvedi

email: support@turbodatatool.com;support@mndatasolutions.com

Indiamart page: https://www.indiamart.com/mnbusinessintelligenceindia/

 


Difference between datawarehouse and a transactional system

Many personnel believe that a datawarehouse is simply a copy of the transactional system. However, there are some key differences between the transaction system and the datawarehouse.


a.) A datawarehouse can be used for the consolidation of data across multiple data sources. A datawarehouse can have two layers: one, an ODS (operational data store), and the second, a star schema layer (a denormalization layer to minimize the joins).




Why is the normalization layer required?

a.) The normalization layer helps consolidate the data from multiple data sources in a single database layer.

b.) It is used for incremental data loads. One does not need to do a full load after the normalization layer; only inserts, updates and deletes will do.

c.) The on-the-fly calculations of the source software are done in the normalized layer.



Thus the ODS layer is different from the transaction layer.


Why not simply use a copy of the transaction system?

a.) The transaction system copy will have only the base data.

b.) The on-the-fly calculations of the software will not be there in the copy of the transaction system. The business logic would need to be worked out.


The purpose of the star schema

The purpose of the star schema is to reduce the joins. The lower the number of joins, the faster the query processing.

What about the snowflake schema? The snowflake schema is a normalized layer; it is used when the dimension tables are very large.
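
A small sketch of the idea with hypothetical tables:

# Sketch: in a star schema the fact table joins directly to each dimension,
# so a report needs one join per dimension. A snowflaked dim_ledger would be
# split further (e.g. a separate group table), adding one more join.
# Table and column names are hypothetical.
import pandas as pd

fact_sales = pd.DataFrame({"ledger_key": [1, 2], "item_key": [10, 11], "amount": [500.0, 750.0]})
dim_ledger = pd.DataFrame({"ledger_key": [1, 2], "ledger": ["A Ltd", "B Ltd"],
                           "group": ["Sundry Debtors", "Sundry Debtors"]})
dim_item = pd.DataFrame({"item_key": [10, 11], "item": ["Widget", "Gadget"]})

report = fact_sales.merge(dim_ledger, on="ledger_key").merge(dim_item, on="item_key")
print(report[["ledger", "group", "item", "amount"]])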


Where do we start?

Understand what is a datawarehouse and see if it fits into your scheme of things.


Name: Apoorv Chaturvedi

email: support@mndatasolutions.com;support@turbodatatool.com

Phone:+91-8802466356




Friday, 20 December 2024

Increase the report efficiency multifold -cost center analysis

Problem statement: many customers use Tally for cost center analysis. The customers need profit and loss and cash flow statements for cost centers across multiple companies. In order to get the details from Tally itself, the end client needs to work with the cost centre vouchers report.



The XML request is as follows:


<ENVELOPE>

  <HEADER>

    <VERSION>1</VERSION>

    <TALLYREQUEST>Export</TALLYREQUEST>

    <TYPE>Data</TYPE>

    <ID>ccvouchers</ID>

  </HEADER>

<BODY>

<DESC>

<STATICVARIABLES>

        <EXPLODEFLAG>Yes</EXPLODEFLAG>

            <SVEXPORTFORMAT>$$SysName:XML</SVEXPORTFORMAT>

<Costcentrename>Aerocitymohali</Costcentrename>


                  </STATICVARIABLES>

      <TDL>

        <TDLMESSAGE>

<REPORT NAME="ccvouchers" ISMODIFY="Yes">

  <ADD>Set : SV From Date:"20170401"</ADD> 

  <ADD>Set : SV To Date :"20250430" </ADD> 

                                       

  <ADD>Set : ExplodeFlag : Yes</ADD>

                </REPORT>

        </TDLMESSAGE>

      </TDL>

</DESC>

</BODY>

</ENVELOPE>



Here the given report needs to be accessed cost center by cost center to get the complete details.
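
A rough sketch of that loop is given below; it assumes the request above has been saved to a file, that the cost centre names are known, and that Tally's web server is reachable on its default port - all illustrative details rather than a prescribed implementation.

# Sketch: run the ccvouchers export once per cost centre by substituting the
# <Costcentrename> value in the envelope shown above. Assumes the envelope was
# saved to ccvouchers_envelope.xml and Tally's web server runs on localhost:9000.
import requests

TALLY_URL = "http://localhost:9000"
cost_centres = ["Aerocitymohali", "Aerocitydelhi"]  # illustrative list

with open("ccvouchers_envelope.xml", encoding="utf-8") as f:
    template = f.read()  # the XML request above, with Aerocitymohali as the placeholder

for cc in cost_centres:
    body = template.replace("Aerocitymohali", cc)
    response = requests.post(TALLY_URL, data=body.encode("utf-8"))
    response.raise_for_status()
    with open(f"ccvouchers_{cc}.xml", "w", encoding="utf-8") as out:
        out.write(response.text)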

Tally also has a number of variables whose value can be set.





Thereafter comes the issue of handling inventory valuations by cost center. This report export will also give the inventory movement details. These issues need to be handled.

The extract takes place only for one company.

Bottlenecks:
a.) Works for only one company.
b.) A number of variables have to be adjusted.
c.) If there are a large number of cost centres, then the report needs to be accessed multiple times; say one has 50 cost centres, then the loop has to be run 50 times.



Getting the cost of sales value: Tally gives the cost of sales details in its profit and loss report.


The cost of sales needs to be derived at the cost centre level too. Hence the inventory valuations also need to be done at the cost centre level, or the inventory valuations need to be done perpetually using the weighted average rate.

In order to get the inventory valuation at the item level, we need to access the stock vouchers report.

Here again we need to get the data repeatedly from Tally. If there are 50 stock items then we need to hit the stock vouchers report 50 times.


Our resolution:
a.) Extract raw data from multiple Tally companies.
b.) Get raw cost centre data for ledger and inventory movements.
c.) The complete group hierarchy should be enabled. We do not need just the ledgerprimary but also the other child attributes for our analysis.
d.) The Tally extract through JSON should happen only once. The data processing should happen in Power BI or SQL Server.


Say a client has 2 companies with 40 ledgers and 10 stock items each; then our productivity should increase by at least 800 times (40 × 10 × 2).
The given cost center ledger movement has the complete group and ledger hierarchy across unlimited companies.



The debit and credit movements have also been specified in the same. From here we shall get the complete ledger movements.


For inventory movements, we have a second view.


Thereafter we have consolidated the ledger and inventory movements onto a single table. In that single table we have done cash inflow, cash outflow, profit and loss and other analysis.


Limitations: the cost centre cost of sales would include only those inventory movements where a godown is involved. One needs to calculate the perpetual weighted average rate.





Alternatively, we can compute the weighted average rate on a perpetual basis from the inventory movements, as sketched below.
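
A minimal sketch of that perpetual weighted-average calculation (the column names and sample movements are assumptions): each inward movement recomputes the running rate, and each outward movement is valued at the rate prevailing at that point.

# Sketch: perpetual weighted-average rate over inventory movements, processed
# in date order. Column names (item, date, quantity, value) are assumptions;
# positive quantities are inward (receipts), negative quantities are outward (issues).
import pandas as pd

movements = pd.DataFrame({
    "item":     ["Widget", "Widget", "Widget"],
    "date":     pd.to_datetime(["2024-04-01", "2024-04-05", "2024-04-10"]),
    "quantity": [100, -40, 50],
    "value":    [1000.0, 0.0, 600.0],   # purchase value for inward rows
})

def perpetual_weighted_average(df):
    df = df.sort_values("date").copy()
    qty_on_hand, value_on_hand, rates, cogs = 0.0, 0.0, [], []
    for _, row in df.iterrows():
        if row["quantity"] >= 0:                       # inward: add quantity and value
            qty_on_hand += row["quantity"]
            value_on_hand += row["value"]
            cogs.append(0.0)
        else:                                          # outward: issue at the current rate
            rate = value_on_hand / qty_on_hand if qty_on_hand else 0.0
            issue_qty = -row["quantity"]
            cogs.append(issue_qty * rate)
            qty_on_hand -= issue_qty
            value_on_hand -= issue_qty * rate
        rates.append(value_on_hand / qty_on_hand if qty_on_hand else 0.0)
    df["weighted_avg_rate"] = rates
    df["cost_of_sales"] = cogs
    return df

# One item shown here; in practice apply per stock item (e.g. via groupby("item")).
print(perpetual_weighted_average(movements))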


For getting the comprehensive data for profit and loss and cash flow statements please contact the following:


Name: Apoorv Chaturvedi

email: support@turbodatatool.com;support@mndatasolutions.com

Phone:+91-8802466356



Data insertion-Tally(approaches)

Problem statement: many softwares look to insert data into Tally from their application. This blog looks at issues and approaches for the sa...