olddirtypipster
01 Mar 2016, 01:19
OK. The issue here is that cAlgo is still incompatible with Visual Studio 2015. If you plan on doing interprocess communication with this API, you must use VS2013 and not VS2015.
Spotware has got to support this upgrade. What is taking so long?
@olddirtypipster
olddirtypipster
21 Aug 2015, 20:01
You do not have to wait on Spotware to add support to cTrader or cAlgo.
Just do the following:
1) Open your bot code in Visual Studio
2) Ensure the cAlgo.API.dll assembly is added as a reference
3) Make sure that the bot has been recompiled to update your debug files, or it won't hit breakpoints
4) Attach the debugger to cAlgo or cTrader
5) Set breakpoints where you need them
6) Start your bot, and voila.
@olddirtypipster
olddirtypipster
19 Aug 2015, 21:33
To answer your question: given the nature of 21st century financial markets, how well your work (trades) goes most definitely depends on how good your tools are and the conditions they operate in. I will explain why, but first you need to know where we currently stand, and recognize that many brokers and platform developers still seem stuck in an antiquated 20th century model of how users should interact with the markets.
Not to steer away from the topic, but both platforms are ineffective for the 21st century. cTrader has lots of potential, but unfortunately the developers seem fixated on designing this platform to be a prettier version of the same existing product, versus letting its users interact with a 21st century high-speed, high-frequency market that demands traders be able to interact DIRECTLY with the raw financial data.
A closed black-boxed platform such as MT4 and to some extent cTrader should NOT be the central core of trading. What do I mean by this?
Given the raw speed of financial markets, the human brain is no longer in a position to interpret price movements at the speeds required to make successful trades by staring at price charts and static indicators. It is no longer optional, but MANDATORY that the user be able to employ statistical models to operate directly on the high speed data coming through (and by models I DO NOT mean simply using existing EA, or lifeless, lagging indicators).
An understanding of order flow comes from the statistical analysis of how prices, and the rate of change of prices per volume, vary on a tick-by-tick basis over a moving average period. This can only be done algorithmically, and requires the user to have an intimate connection with pure, reliable, non-aggregated level 2 data that can then be filtered for degeneracy in prices and re-aggregated with several other venues for a complete picture of how the volatile currents in the vast, decentralized ocean of Forex liquidity ebb and flow.
As implied before, the trading platform should no longer be designed to be the central tool for a trader; the DATA should be, as you cannot get what you need by staring at a price chart and guessing what might happen next based on what happened before.
To know what is happening RIGHT NOW, you need instant and direct access to the data NOW. You also need the skills to extract useful information from that instantaneous data, juxtapose it with what happened in the short-term intra-day past, be able to intelligently and very quickly browse past data for similar market conditions, cross-reference those past market conditions with what you have NOW, and then form a trading decision based on all this information presented back to you.
This requires a highly open-ended, Integrated Trading Developer Platform where traders may develop their own 'statistical code' and seamlessly apply this code to the data at hand, where the DATA becomes the central feature that determines the success of a forex trader. The chart view and trade execution interface would then be relegated to secondary features, or even completely separate from the primary platform, whose sole purpose would be to make sense of the real-time market intelligently.
Are we all beta testers? Yes. Sadly, the direction Spotware seems to be taking is making a newer, shinier, .NET'ier version of the same old, same old. They seem more focused on making a better MT4 than on revolutionizing how retail forex trading can and should be done.
@olddirtypipster
olddirtypipster
19 Aug 2015, 02:02
RE:
vitalikifel said:
Gday guys,
does someone else feel like all cTrader and cAlgo users are testers of a software in an open beta stage?
I do!
And it doesn't matter if it's the Web, Windows or Android version of cTrader, and it doesn't even matter which broker; I have tested some of them.
Here are some examples:
- Simple stuff like scrolling sometimes does not work
- Login takes ages (especially in the Android app)
- Sometimes it doesn't load my settings, and when I have saved them as a template, the template doesn't exist any more
- Sometimes when I close the web or the Android version for just a few seconds I have to log in again (especially in the Android version)
- And anyway, the Android version is sometimes so slow, or just freezes, when I do simple stuff like adding or modifying an indicator or opening up a chart
- When I do backtesting in cAlgo on the same pair with exactly the same settings, it shows me completely different results each time
I really do like all the software made by Spotware and its features, and I think it's the most innovative trading software of the past few years, but there are some bugs that should not be there.
I really think about going back to MT4, even though I hate it, but with primitive bugs like these it really is no fun, and it can even be dangerous.
I don't want to wait ages until my phone opens the app, then I just want to add an indicator but suddenly the app freezes and I need to restart again, wait again until the app logs me into my account, and take care all the time not to click something wrong so the app freezes or shuts down again.
Or when I just want to scroll down my trading history in the web app but I can't scroll at all.
Or I want to backtest a bot and, without changing anything, I run the same backtest twice with completely different results.
So instead of adding more and more features to your software, you should sit down and test the hell out of your current versions and try to fix as much as possible.
Have you consistently profited from cTrader and are you still doing so?
Have you consistently profited from MT4 and are you still doing so?
This should be the deciding factor on what you do next.
@olddirtypipster
olddirtypipster
18 Aug 2015, 00:47
( Updated at: 13 Oct 2015, 01:37 )
Unless your bot is secure (obfuscated), .NET is such that the assembly can be reflected and your trade strategy used against you.
My advice would be to place your sensitive strategy on a separate obfuscated assembly and connect to it via some TCP/IP means.
Trust no one.
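For instance, here is a minimal sketch of that TCP bridge idea. Every name below is hypothetical, and the wire format and signal vocabulary are assumptions for illustration, not an existing API:

```csharp
// Hypothetical bridge (all names made up): the cBot forwards market data to a
// separately obfuscated strategy process over TCP and reads back a signal.
using System.IO;
using System.Net.Sockets;

public class StrategyBridge
{
    private readonly TcpClient _client = new TcpClient();
    private StreamWriter _writer;
    private StreamReader _reader;

    public void Connect(string host, int port)
    {
        _client.Connect(host, port); // the obfuscated strategy process listens here
        var stream = _client.GetStream();
        _writer = new StreamWriter(stream) { AutoFlush = true };
        _reader = new StreamReader(stream);
    }

    // Send a market snapshot; the strategy side decides and replies, e.g. "BUY", "SELL" or "NONE".
    public string RequestSignal(string symbol, decimal bid, decimal ask)
    {
        _writer.WriteLine($"{symbol};{bid};{ask}");
        return _reader.ReadLine();
    }
}
```

The sensitive decision logic then lives only in the obfuscated server process, so reflecting the cBot assembly reveals nothing but plumbing.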
@olddirtypipster
olddirtypipster
14 Aug 2015, 15:26
RE:
Spotware said:
We will think about that. Thank you for your suggestions.
This is a good first step. Please keep us informed and up to date. This is a major bug that needs to be addressed promptly.
@olddirtypipster
olddirtypipster
14 Aug 2015, 14:07
Two succinct quotes that highlight the principal concept behind reactive streaming:
"Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure."
"...The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary ... while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. In other words, back pressure is an integral part of this model in order to allow the queues which mediate between threads to be bounded."
Done properly, this will solve your problem in one fell swoop.
You can read more at http://www.reactive-streams.org/
@olddirtypipster
olddirtypipster
14 Aug 2015, 13:51
Just because a data packet is in the past does not make it obsolete! Each packet carries with it a statistically important event that needs to be accounted for when building a wholesome picture of the market order flow. You cannot just discard these because they are in the past and the user end is not keeping up.
This is like saying that because the user's cBot is unoptimized and slow, cAlgo will help them along by deleting some of its data so the user can catch up. If the user end wants to skip packets because they cannot keep up, that should be left to their discretion.
cAlgo needs to GUARANTEE the delivery of ALL data to the user, versus deleting this data because its internal buffer is full, and that is not what your proposed fix will do.
A mandatory requirement is to guarantee the delivery of ALL data to the user end by flushing ALL past data to the subscribed user before the arrival of the next packet.
This can be accomplished by:
1) reducing the packet size by sending incremental market depth updates instead of the full orderbook each time;
2) approaching the data delivery algorithm heuristically: the parallelized for loop should be intelligent enough to know when the internal buffer has more than one entry, and flush the entire list out to the user as an Observable<T> stream.
This is entirely possible, but it would require that you replace your blocking event handler with a reactive stream instead. The problem with using a for loop is that you guarantee blocking in the loop whenever the cBot handler takes too long to return control back to the loop. You do not have this blocking effect when you use asynchronous Observable streams.
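A minimal sketch of that buffered-flush idea using Reactive Extensions. System.Reactive is assumed to be available, and the publisher class below is hypothetical, not part of the cAlgo API:

```csharp
using System;
using System.Collections.Generic;
using System.Reactive.Linq;
using System.Reactive.Subjects;

// Hypothetical engine-side publisher: entries are pushed into a Subject and
// delivered to subscribers in batches, so every update is retained and a slow
// consumer receives the backlog as one flushed list instead of dropped events.
public class DepthPublisher
{
    private readonly Subject<MarketDepthEntry> _updates = new Subject<MarketDepthEntry>();

    // Everything that accumulated in the last interval is flushed as a single batch.
    public IObservable<IList<MarketDepthEntry>> BatchedUpdates
    {
        get
        {
            return _updates.Buffer(TimeSpan.FromMilliseconds(100))
                           .Where(batch => batch.Count > 0);
        }
    }

    // Called by the engine's worker thread; delivery to subscribers happens on
    // Rx's timer scheduler, not inside this call, so a slow handler cannot
    // stall the producer loop.
    public void Publish(MarketDepthEntry entry)
    {
        _updates.OnNext(entry);
    }
}
```

A consumer would then write something like `publisher.BatchedUpdates.Subscribe(batch => Analyze(batch));` and never starve the producer loop.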
I suggest you review the two videos on reactive streaming below to see where I am coming from. They illustrate how Reactive Extensions by Microsoft can help solve the problem you are facing.
https://channel9.msdn.com/Series/Rx-Workshop/Rx-Workshop-Event-Processing (VERY ILLUMINATING VIDEO!!)
https://channel9.msdn.com/Series/Rx-Workshop/Rx-Workshop-Writing-Queries
I seriously ask that you reconsider the way you will implement this fix. If you proceed with your current method, you will severely hamper the usefulness of this platform.
Holla back.
@olddirtypipster
olddirtypipster
14 Aug 2015, 13:17
RE:
Spotware said:
Dear Trader,
Neither 1) nor 2) is correct. Let's imagine that your cBot takes 5 seconds to handle a market depth changed event and market depth changes every 2 seconds.
Current flow:
- Time: 0s. Market depth changes first time, cBot starts to handle State #1
- Time: 2s. Market depth changes second time, cBot is still handling State #1
- Time: 4s. Market depth changes third time, cBot is still handling State #1
- Time: 5s. cBot finished handling of State #1, cBot starts to handle State #2. State #3 is still in the queue.
How we are going to change the flow:
- Time: 0s. Market depth changes first time, cBot starts to handle State #1
- Time: 2s. Market depth changes second time, cBot is still handling State #1
- Time: 4s. Market depth changes third time, cBot is still handling State #1
- Time: 5s. cBot finished handling of State #1, cBot starts to handle State #3. State #2 is skipped because it is obsolete.
This would mean that the user will not have access to the complete data stream. cAlgo will now be discarding packets before they ever reach the user because they are in the past.
@olddirtypipster
olddirtypipster
14 Aug 2015, 11:55
RE:
Spotware said:
You can find a lot of charting libraries in the internet.
It would make more sense to be given the ability to subclass an existing window versus building one from scratch!
The ChartView is good, but it would be better if it allowed users to inherit its classes and extend its existing features.
To do so would be staying true to the principles of OOP design.
@olddirtypipster
olddirtypipster
14 Aug 2015, 11:18
RE:
Spotware said:
Dear Trader,
As we already said we plan to skip old market depth changed events if new event is already in the queue. We already did it for OnTick event. We also plan to update market depth in RefreshData method.
Why do you think that the above changes will not help you?
I think you will need to be clearer about what you mean by "skip old market depth changes". I suspect that on two occasions I have misinterpreted your statement for a solution.
1) In the first instance, I assumed that you were going to deviate away from pushing the full market depth every time, to sending only incremental changes.
2) Then, I assumed that what you meant was that you would only push market depth when cServer notifies of new depth.
If either of these assumptions is correct, please identify which one. If not, I look forward to you providing additional clarity on this matter.
On the other hand, should the first case be the correct assumption, you will DEFINITELY need to augment your API to make the orderbook useful in any way. I have explained the details of why this must be so in a previous post.
If the second assumption I made is correct, then this is not a solution since this is already being done; the orderbook window does not refresh when the market is inactive, meaning that it only responds when cServer is sending new depth data.
@olddirtypipster
olddirtypipster
13 Aug 2015, 18:35
RE:
Spotware said:
When you move away from sending a complete orderbook to sending incremental changes...
Currently there are no plans to change market depth API. We just plan to invoke market depth changed event only on latest available data as we already did for OnTick event.
And another thing...
There are times when the market is not very active, that the orderbook display is stationary.
Does this not mean that you are ALREADY sending only updated data as it comes in? Why then say that you will fix the problem by doing something that you are already doing?
This makes no sense at all...
@olddirtypipster
olddirtypipster
13 Aug 2015, 18:21
RE:
Spotware said:
When you move away from sending a complete orderbook to sending incremental changes...
Currently there are no plans to change market depth API. We just plan to invoke market depth changed event only on latest available data as we already did for OnTick event.
In this case, I can guarantee that the impact of this will be minimal, as you will simply be eliminating repeated prices. This will not meaningfully reduce the amount of data being transmitted, and will most certainly not solve this issue.
Look up.
See that post above right there? Contained in that is a solution that will work. If you can't or don't want to do it, I can. Just say the word when.
@olddirtypipster
olddirtypipster
13 Aug 2015, 18:13
RE:
The issue is not with the cBot. The issue is with the way cAlgo posts updates internally. Consider the critical section of my cBot code:
protected override void OnStart()
{
    _marketDepth = MarketData.GetMarketDepth(Symbol);
    _marketDepth.Updated += MarketDepthOnUpdated;
}

private void MarketDepthOnUpdated()
{
    if ( _marketDepth.AskEntries.Count > 0 && _marketDepth.BidEntries.Count > 0 )
    {
        CAlgoOrder order = new CAlgoOrder()
        {
            AskPrices = _marketDepth.AskEntries,
            BidPrices = _marketDepth.BidEntries,
            BrokerName = Account.BrokerName,
            CurrencyPair = Symbol.Code,
            ServerTime = Server.Time, //Breakpoint placed here for a confirmation test (see below)
        };
        // _clientCommunicationPipeline.Add(order); //uncomment to add the order to the BlockingCollection pipeline
        // DoWorkDirectly(order); //uncomment to do the job directly without using the BlockingCollection
    }
}
TEST 1: In the first test I carried out, I filled the prices as they came in into a BlockingCollection and used a single-threaded foreach loop to pull individual prices from the orderbook. Lag occurred within the first 10 minutes of use. Placing a breakpoint on the highlighted line, the internal cAlgo Server.Time remained synced with the current time.
TEST 2: I replaced the single-threaded foreach loop with a Parallel.ForEach loop to pull prices from my BlockingCollection. Prices were pulled in significantly quicker, and lag was less severe but still present; it took around 30 minutes to see significant lag. Placing a breakpoint on the highlighted line, the internal cAlgo Server.Time remained synced with the current time.
TEST 3: I did not use the BlockingCollection but instead performed the job directly as the prices came in. Lag was 100% eliminated provided that the job was not CPU intensive, but if it became too time consuming, lag persisted.
The first two tests would have obviously led to lag if we couldn't pull out prices fast enough, but the third test was a dead giveaway that the fault lay solely with cAlgo. What is occurring here is that cAlgo also has its own internal buffer that stores prices before they are posted to the client bot.
As the update event that notifies MarketDepthOnUpdated is blocking, if it is blocked for too long due to long processing time, its internal buffer becomes backlogged in a way similar to how the buffers in TEST 1 and TEST 2 became backlogged. As a result, lag persists.
To confirm this, I commented out the two lines of code shown above, and paused at the breakpoint for a few seconds each time over the course of several minutes. Sure enough, what I found was that after a few times of doing this, I was able to get Server.Time to be permanently lagged behind in time!
The only way I could set the time back to normal was to restart the bot.
Now we see that the user is faced with a permanent dilemma if they need to perform CPU time intensive analysis of the incoming data:
1) If they opt to store the data in a buffer and loop over each item for analysis, the buffer will overrun, and they will lag.
2) If they opt to eliminate their buffer storage and perform the CPU-intensive analysis immediately as the data comes in, each time they perform an operation on a packet they will block further update posts from cAlgo's _marketDepth.Updated event, and cAlgo's internal buffer will overrun, eventually resulting in lag.
Sorry, but cAlgo is at fault here! It was not designed with the following premises in mind:
- real-time reliability (no missing data, and reflects the current market)
- adaptability (can adapt to varied trading conditions; slow/fast data in this case)
- extensibility, scalability and modularity (allows users to interface with their financial data in a way that adheres to the first two conditions)
So what are the REAL solutions?
Firstly, reducing data packet size will definitely help; the less immediate analysis that needs to be done, the better.
Secondly, cAlgo's internal thread that is responsible for posting _marketDepth.Updated NEEDS TO BE PARALLELIZED! This should not be running on a single thread, but should be grabbing as many updates from its own internal buffer as quickly as possible. This will minimize the degree of internal buffer backlog.
Lastly, deprecate the use of events when posting your data. This should be done as a reactive stream that a user subscribes to instead. Below is an example of a classic case, and how I converted it to a stream.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Classic case of using and handling events ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//MarketDataEngine class inside the main cAlgo/cTrader engine
public class MarketDataEngine : ThreadObject
{
    internal MarketDepthEntry[] marketDataInternalBuffer; //assume this is being filled asynchronously by some TCP/IP connection to cServer

    //this is the event that the cBot must subscribe to in order to get market depth updates
    public delegate void Updated( MarketDepthEntry entry );
    public event Updated MarketDepthNotification;

    //Here, the worker thread posts a market depth notification as rapidly as it can using a for loop (thread safety has been omitted for clarity)
    protected override void WorkerThread(object parameters)
    {
        //the serialized for loop I was talking about. This should really be parallelized.
        for (int tick = 0; tick < marketDataInternalBuffer.Length; tick++)
        {
            if ( MarketDepthNotification != null )
                MarketDepthNotification(marketDataInternalBuffer[tick]);
        }
    }
}
//Market depth consumer class inside the cBot
public class ConsumerClass
{
    private MarketDataEngine _marketDepth;

    public ConsumerClass()
    {
        _marketDepth = new MarketDataEngine();
        //This is the classic way of subscribing to the update event
        _marketDepth.MarketDepthNotification += MarketDepthOnUpdated;
    }

    //This is the classical way of handling notifications in the client consumer class (in this case, the cBot).
    //We would like to go to town with this data, but the lag we incur makes this impossible.
    //After a few minutes, we find that the data being sent is no longer current, since the MarketDataEngine
    //is sending us old data backlogged in its internal MarketDepthEntry[] buffer. So sad.
    private void MarketDepthOnUpdated(MarketDepthEntry entry)
    {
        //This blocks future updates, causing lag, as it makes cAlgo's internal buffer backlog
        //while it waits until it can next post an update.
        CpuIntensiveAnalysisOfData(entry);
    }
}
~~~~~~~~~~~~~~~~~~~~~~Here is how it is done using reactive Extensions (this is where the magic begins)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
public class ObservableMarketDataEngineStream : ThreadObject
{
    internal MarketDepthEntry[] marketDataInternalBuffer; //assume this is being filled asynchronously by some TCP/IP connection to cServer

    //this is the event that the cBot must subscribe to in order to get market depth updates
    public delegate void Updated( MarketDepthEntry entry );
    public event Updated MarketDepthNotification;

    //This is how we convert our event into a NON-BLOCKING continuous stream of data
    public IObservable<MarketDepthEntry> MarketDataStream
    {
        get
        {
            return Observable.FromEvent<Updated, MarketDepthEntry>(
                h => entry => h(entry),
                h => MarketDepthNotification += h,
                h => MarketDepthNotification -= h);
        }
    }

    //Here, the worker thread posts a market depth notification as rapidly as it can
    protected override void WorkerThread(object parameters)
    {
        //still should be parallelized as part of the cAlgo optimization strategy
        for (int tick = 0; tick < marketDataInternalBuffer.Length; tick++)
        {
            if ( MarketDepthNotification != null )
                MarketDepthNotification(marketDataInternalBuffer[tick]);
        }
    }
}
//Market depth consumer class inside of the cBot extension
public class ConsumerClass
{
    private ObservableMarketDataEngineStream _marketDepth;

    public ConsumerClass()
    {
        _marketDepth = new ObservableMarketDataEngineStream();
        MarketDataStream = _marketDepth.MarketDataStream;
    }

    public IObservable<MarketDepthEntry> MarketDataStream
    {
        get;
        set;
    }
}
The client can then subscribe to the stream as follows:
var dataStream = ( from entry in MarketDataStream select entry ).Subscribe(DoStuffFromThisDelegate);
void DoStuffFromThisDelegate(MarketDepthEntry entry)
{
}
or perform LINQ queries directly on the stream in a non-blocking fashion, like so:
var filteredDataStream = ( from entry in MarketDataStream where entry.Price > 1.54670m && entry.Volume > 50000 select entry ).Subscribe(DoStuffFromThisDelegate);
This is truly magical!
There are some important things to note here:
In the first case, because of the way events are posted and handled, operations that take a long time to complete will cause cAlgo/cTrader to block future posts, causing its internal buffer to backlog.
In the modified case, because the events have been converted into reactive streams, blocking issues inside cAlgo's engine are eliminated while streaming.
@olddirtypipster
olddirtypipster
13 Aug 2015, 16:35
RE:
Spotware said:
For example we invoke OnTick handler on the newest tick only, while MarketSeries object reflects all ticks. We believe it is the best approach for single thread model.
Your idea of a solution from what I gather is to send updates only when the orderbook has changed, with only those changed orders, instead of sending the entire orderbook. This will significantly reduce data throughput overload and is a viable solution.
However, please note that unless you modify the data packet to include information to tell the client what price now needs to be removed from the newly updated orderbook, you will incur other problems.
Currently, you are posting a complete orderbook as an IReadonlyList<MarketDepthEntry> of price depth, where each element contains a price level of the form:
public sealed class MarketDepthEntry
{
public decimal Price{get;set;}
public double Volume {get;set;}
}
When you move away from sending a complete orderbook to sending incremental changes, you will need to add at least two additional fields to inform the client of the changes that have taken place between the current and previous orderbook. This information is critical for the client to reconstruct the orderbook for each incremental change. At a minimum, you will require the following two fields:
public MDUpdateAction Action {get;set;} //informs the client of a NEW, UPDATE or DELETE for this price level, i.e. whether contract volume at this price has increased, decreased, or the level has been removed
public DeleteReason Reason {get;set;} //informs the user of the REASON for the requirement to DELETE this price level (CANCELED or FILLED)
The other two fields are for the purpose of supporting non-aggregated market depth, as they allow the client to keep track of each individual price. These are:
public int MDEntryID {get;set;} //a unique id that tags each price level so users may track individual contracts
public int LpID {get;set;} //a unique id that tags each price level so clients may track the pseudo-origin of the contract. It will not reveal the actual liquidity provider, but allows the client to differentiate between contract origins.
These last two fields are OPTIONAL, and it is at the discretion of the broker to fill them or leave them NULL.
The first two fields are critical to the changes you proposed to solve the existing problem.
If the order-book is sent incrementally instead of full, packet size is reduced, and buffer overrun is ameliorated. It will go a long way to solving the issues observed.
holla back.
-OldDirty-
@olddirtypipster
olddirtypipster
12 Aug 2015, 19:06
If your aim is to revamp the current way MarketData updates are posted, I would like to suggest a few changes that more closely reflect true FIX protocol when posting incremental orderbook updates.
You can research and validate these provisions at the official www.FIXprotocol.org website. The modifications I would suggest are as follows:
Each price update should have the following codes attached:
A MDUpdateAction that informs the client of the nature of the update that occurred to the most recent price in the incremental update.
A DeleteReason that informs the user of why a price level needs to be deleted from the previous orderbook.
A MDEntryID that assigns a unique integer id to a price that is being added or updated in the newest incremental update. This id is reset when the price is deleted.
A LpID which, for non-aggregated market depth, ties each price level to a unique id. While this will not reveal the originating liquidity provider, it will allow the client to differentiate between prices and their providers.
The new MarketDepthEntry structure would be as follows:
public sealed class MarketDepthEntry
{
    public decimal Price {get;set;}
    public double Volume {get;set;}
    public MDUpdateAction Action {get;set;}
    public DeleteReason Reason {get;set;}
    public int MDEntryID {get;set;}
    public int LpID {get;set;}
}
where MDUpdateAction and DeleteReason are enums.
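For illustration, the two enums might look like the sketch below. The member names are an assumption loosely modeled on FIX conventions (MDUpdateAction, tag 279, and DeleteReason, tag 285), not a confirmed design:

```csharp
// Hypothetical enums, loosely following FIX conventions.
public enum MDUpdateAction
{
    New,    // price level added to the book
    Change, // volume at an existing price level changed
    Delete  // price level removed from the book
}

public enum DeleteReason
{
    Canceled, // the resting orders at this level were cancelled
    Filled    // the resting orders at this level were filled
}
```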
Hopefully, this is the direction you plan to take.
I'd be very interested to hear what you think.
-OldDirty-
@olddirtypipster
olddirtypipster
12 Aug 2015, 18:31
RE: RE:
In reading your last response again, it seems clearer now. Your aim is to only post changes to the orderbook, and not the full orderbook. This is similar to how it may be done using raw FIX protocol (incremental updates verses the full book every time).
What is very important here then, is that you provide a means for the user to know when a bid/ask price is no longer on the orderbook.
For example, the first time you send the book, you might have the following depth:
1.55645 @ 100000
1.55643 @ 50000
1.55640 @ 200000
Now, let us say that you no longer have 1.55643 on the book. When you next send an incremental update, you need to bear in mind that on the user end, I still have price 1.55643 @ 50000 from my last refresh. You need to implement a mechanism that informs the client that 1.55643 has been removed from the orderbook during the next refresh.
What would also be brilliant is if you could also let the client know why this price is no longer on the book (was this contract cancelled, sold out, or what?). The suggestion you made could work in reducing data throughput overload, but unless you accommodate the fact that the user must now do most of the work in updating their latest orderbook from the most recent cServer publishes, everything is rendered useless.
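To illustrate the bookkeeping this puts on the user end, here is a sketch of a client-side book that applies such removal notices. The MDUpdateAction enum and entry fields are the ones proposed in this thread, not an existing cAlgo type:

```csharp
using System.Collections.Generic;

// Hypothetical client-side book: applies incremental updates so the local
// copy stays in sync with what cServer last published.
public class LocalOrderBook
{
    private readonly Dictionary<decimal, double> _levels = new Dictionary<decimal, double>();

    public void Apply(decimal price, double volume, MDUpdateAction action)
    {
        switch (action)
        {
            case MDUpdateAction.New:
            case MDUpdateAction.Change:
                _levels[price] = volume;   // add or replace the level
                break;
            case MDUpdateAction.Delete:
                _levels.Remove(price);     // e.g. 1.55643 leaves the book
                break;
        }
    }

    public double VolumeAt(decimal price)
    {
        double volume;
        return _levels.TryGetValue(price, out volume) ? volume : 0.0;
    }
}
```

Without the Delete action, the client has no way to know that a stale level such as 1.55643 @ 50000 should be evicted.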
I would be very interested to know how this new fix will be implemented.
This makes sense.
@olddirtypipster
olddirtypipster
12 Aug 2015, 18:17
RE:
Spotware said:
When you say: "We plan to skip old market depth changed events if new event is already in the queue"are you implying that you plan to implement a fixed circular buffer that will purge old events not taken from the buffer, to leave space for newer event? If this is the case, then the result would be gaps in the MarketData stream on the user end. If I understood you correctly, then I strongly advise against this.
No, the plan is not to invoke user handlers on obsolete events. API objects will be updated by every message. For example we invoke OnTick handler on the newest tick only, while MarketSeries object reflects all ticks. We believe it is the best approach for single thread model.
Could you then be clearer as to what you define to be an 'obsolete' event? In my mind, if a market data event occurred, then it is important and I want to know about it. How can it be obsolete in this case?
@olddirtypipster
olddirtypipster
01 Mar 2016, 22:37
RE:
olddirtypipster said:
OK. Never mind that. It works now in VS2015. All of a sudden. Not sure why, but it works.
@olddirtypipster