Hi Kobra,
What test are you talking about, exactly? AFAIK Clementi (the author of those tests) hasn't done any on-access scanning tests... Or are you talking about the recent "Retrospective/ProActive" tests? If so, I'd strongly recommend reading up on what these tests actually are -- they are on-demand tests too. (OK, I can tell you as well: what these tests actually do is scan a database of new viruses with a scanner that has an old virus database. This shows how the scanner performs if you fail to update it. True, some scanners are more tolerant of this than others (avast being among the less tolerant), but hey, that's why there's the cool auto-updater, right??)
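Just to make that methodology concrete, here's a toy sketch of what a retrospective test measures. Everything in it (the `Sample` type, `scan_with_frozen_db`, the sample names) is hypothetical illustration, not avast's engine or Clementi's actual tooling: the point is only that any sample first seen *after* the database cutoff can only be caught by heuristics or generic signatures.

```python
# Hypothetical sketch of a "retrospective" AV test: freeze the signature
# database at a cutoff date, then scan only samples first seen AFTER that
# cutoff. Anything still detected must come from heuristics or generic
# signatures, not from an exact, up-to-date definition.
from dataclasses import dataclass
from datetime import date

@dataclass
class Sample:
    name: str
    first_seen: date

def scan_with_frozen_db(sample: Sample, db_cutoff: date, known: set) -> bool:
    """Toy scanner: an exact signature only exists if the sample was
    already known before the cutoff; otherwise pretend a crude heuristic
    fires for names containing 'Generic'."""
    if sample.first_seen <= db_cutoff and sample.name in known:
        return True                      # exact signature hit
    return "Generic" in sample.name      # stand-in for a heuristic hit

def retrospective_score(samples, db_cutoff, known):
    # Only samples newer than the frozen database count toward the score.
    new = [s for s in samples if s.first_seen > db_cutoff]
    hits = sum(scan_with_frozen_db(s, db_cutoff, known) for s in new)
    return hits, len(new)

samples = [
    Sample("Worm.Oldie", date(2004, 1, 10)),
    Sample("Trojan.Generic.New", date(2004, 6, 1)),
    Sample("Worm.BrandNew", date(2004, 6, 2)),
]
hits, total = retrospective_score(samples, date(2004, 3, 1), {"Worm.Oldie"})
print(f"proactive detection: {hits}/{total}")  # -> proactive detection: 1/2
```

A scanner with strong heuristics scores well here even with stale definitions; one that relies purely on exact signatures scores near zero, which says little about how it performs when kept updated.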
I don't take full stock in Clementi's tests, and I *DO* understand the methodology behind them to some extent, despite my confusing wording. =) Anyway, basically what this test does is measure generic detection and heuristics, correct? Obviously, his way of doing this -- simply using retroactive databases -- could be considered a flawed method, but couldn't it also be considered a reasonable way to test heuristics and generic protection? What's most concerning is that in some of the tests, avast appears to not exhibit *ANY* heuristic or baseline detection without definitions in place. If that statement is incorrect, please let me know, because on the surface that's what the results appear to show. I did note very good on-demand performance from avast with updated definitions in his tests, which is to some extent comforting -- but still not up to the level of other similar products.
This application has been discussed here on the forum multiple times. See e.g. http://forum.avast.com/index.php?board=2%3baction=display%3bthreadid=778%3b . All I can say is that testing AV software on non-existent, artificial samples just doesn't make any sense. We will NOT be adding any mechanisms to detect the AVTest-and-the-like samples into avast just to make it pass those tests... Also, to get an idea of who you're dealing with, you may want to visit their "corporate website": http://www.damselsoft.freeservers.com ...
Whoever produced it isn't as important to me as why its generic signatures are missed, and when I combine this with Clementi's heuristic/generic comparisons, it only adds a bit to my worry. =) Some AVs pick up the AVTest 3.0 samples as "Generic" or "Variant", indicating a heuristic hit -- which IS comforting. I'm not so much interested in actual signatures being added for them as I am in heuristic pickup.
Basically, what I'm saying is, I'm looking for some confirmation that avast has a deep heuristic system at work somewhere. Can you point me to any specific tests in this regard, or data to shed some light on this? After a poor experience with some AVs, and after testing 15+ different products, I'm actually incredibly impressed with this latest avast (as long as I turn off the basic interface, lol!). But I want to make sure my impressions are more than skin deep -- and that the AH is there.
Thank you in advance!
PS: On the bright side, avast has picked up every single bad guy I've thrown at it, including rebased/repacked, masked, and altered Trojans, Trojan launchers, and Trojan downloaders. It should also be noted that only 4 AVs in existence can pick up ALL of these samples (approx. 20 of them). These are in my personal collection, gathered from the internet in my encounters over the last few months. NOD32 completely failed on every single one of those threats. So far, my *REAL* tests have avast pegged at 100%, but I will be the first to admit my testing pool is limited to about 150 threats. Hope my above question can be answered!