Avast WEBforum
Other => Viruses and worms => Topic started by: REDACTED on September 22, 2018, 04:00:27 PM
-
Our site info.santander.com.uy, which is linked from the main site www.santander.com.uy, has been blacklisted as phishing.
Can you please clarify the reasons behind this decision and how to unblock it?
Thanks!
Thanks!
-
Not there: -https://www.santander.com.uy
Consider on reverse DNS: Invalid URL
The requested URL "[no URL]", is invalid.
Reference #9.af8e7b5c.1537626237.158db467
Also: https://www.virustotal.com/#/ip-address/104.82.201.165
VT responds Oops, I know nothing about this item.
Hi there, my name is Win32.Helpware.VT... certain antivirus labs also call me W32.eHeur.BadNews.GAFE, I guess it is because every time I appear they get very upset. It looks like you found a hole in my malware net...
IP address "104.82.201.165" not found
Re: https://toolbar.netcraft.com/site_report?url=https%3A%2F%2Fwww.santander.com.uy
and https://aw-snap.info/file-viewer/?protocol=not-secure&ref_sel=GSP2&ua_sel=ff&chk-cache=&fs=1&tgt=d3d3LnN8bnR8biN7fS5eXW0udXlg~enc
See domain search results here: https://www.virustotal.com/#/domain/www.santander.com.uy
Clean MX alerts PHISHING: https://www.virustotal.com/#/url/28a15f42b9e6b0f6a5d65dcf69e7ac145a7e14c0999653c448c228e1bbaa8b72/detection
polonus
-
Hi polonus, thanks for your reply.
I've already checked all the usual sites for anomaly detection and found nothing so far indicating a problem.
VirusTotal reports no problem against www.santander.com.uy nor against info.santander.com.uy (the flagged domain). NetCraft also reports no problems.
Also, the info site is going through Cloudflare on IP 104.20.249.118, so I don't think that's the problem either.
Do you have any insight into the reasons Avast has for flagging a domain? It's not at all clear to us what may be wrong.
Thanks,
G.-
-
-> https://sitecheck.sucuri.net/results/info.santander.com.uy
-> https://zulu.zscaler.com/submission/f5c3cf45-d2e3-4850-acf6-9b1e1f12cc3a
-> https://www.virustotal.com/#/url/8ac9b726878c230c4f8a5cabb6904d305401c0893c4940f4e793c01c58d71883/detection
You can report a suspected FP (File/Website) here: https://www.avast.com/false-positive-file-form.php
-
Hi, thanks for your reply!
I already reported the problem on that URL, but I don't know if there's anyone there today.
So I'm trying to understand WHY the site was flagged in an attempt to fix it ASAP.
Certainly there's no phishing on that domain, nor was it compromised in any way, so there's something about our domain that the Avast algorithm didn't like.
The new site went live yesterday, and we can't really afford to wait until Monday for someone from Avast to review the complaint.
Any insights about what it may be are really appreciated.
Thanks again,
G.-
-
I already reported the problem on that URL, but I don't know if there's anyone there today.
The guys from the threat lab also work on weekends.
-
Do you know how long it usually takes to fix a problem like this? Or whether a support account exists?
-
Usually a few hours.
-
Website is insecure in this respect according to Tracker SSL:
Website is insecure by default
100% of the trackers on this site could be protecting you from NSA snooping. Tell -santander.com.uy to fix it.
Identifiers | All Trackers
Insecure Identifiers
Unique IDs about your web browsing habits have been insecurely sent to third parties.
dafa79a834b798f7ce114bcba5e116ee41537635149 info dot santander dot com dot uy __cfduid
Legend
Tracking IDs could be sent safely if this site was secure.
Furthermore, consider the 9 security errors here: https://webhint.io/scanner/4c67feca-5580-4371-8555-b2c0039417a7
4 vulnerable retirable jQuery libraries found: https://retire.insecurity.today/#!/scan/022399493f4b1d01b69cf4428cf2223cd8866a2f8f8711f3d8eee311375093af
polonus (volunteer website security analyst and website error-hunter)
-
Hi polonus, thanks for your pointers.
I've already checked most of the sites, and aside from some recommendations and best practices that could be followed, none of that justifies Avast classifying the santander.com.uy domain as phishing.
The main questions here are:
Why is Avast classifying as phishing a site that obviously isn't?
Why does it take them so long to respond, given that customers have been complaining online about it for hours? (see attached image)
This is really damaging on many fronts, and not justifiable by an outdated jQuery library.
-
We have received no response so far from Avast, does someone know a better way to report the issue?
We've been having problems all weekend because of the misclassification.
Thanks
-
Hi, I'll forward it for you.
-
Info: It will be fixed in next VPS update.
-
@Asyn, thanks for your reply!
Do you know when the next VPS update is scheduled?
-
You're welcome. (Nope, but most probably later today...)
-
Another question.
Does anyone know if it's possible to find out when and why Avast classified the domain as vulnerable?
Thanks
G.-
-
Does anyone know if it's possible to find out when and why Avast classified the domain as vulnerable?
Let's see, the talkative threat lab guys will be back on Monday... ;)
-
Hi,
There are two issues:
First, info.santander.com.uy really was blocked from the 30th of July (!!) until just now. I checked the statistics, and it seems that only ~30 users saw the detection in the past 7 days, so it is not likely the main cause.
Second, there is a widespread infection of MikroTik routers that appends malicious code to legitimate websites. This would also show up as HTML:Script-inf.
To sum it up, considering that you say there were many people complaining this weekend, I would bet it is mainly because of the second possible reason.
-
HonzaZ, thanks for your answer.
The domain was in development mode until Friday, when it went live, so somehow you classified a site under development as phishing.
People are complaining about the block this weekend because the site went live this weekend, and I don't think the infected routers are the problem, since it's running in AWS behind an ELB and using Cloudflare.
Do you have the original reason why you blocked the domain? It would be really appreciated, since Avast is widely used and we need to avoid events like this in the future.
-
... I don't think the infected routers are the problem, since it's running in AWS behind an ELB and using Cloudflare.
It is not because your routers are infected; it is because the users' routers are infected. Search for "MikroTik infection" and you will see what I am talking about; our blog post is not published yet.
In short, users' routers were infected in such a way that they injected a malicious script into the HTML content of all URLs (Google, Microsoft, and most likely also Santander), which then resulted in the HTML:Script-inf detection. This has nothing to do with security on your side; it is just another type of man-in-the-middle attack.
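A rough way for a site owner to sanity-check for this kind of injection is to extract the script sources from fetched HTML and compare them against an allow-list of known-good hosts. This is only a sketch: the hostnames and the sample HTML below are illustrative, not taken from the actual incident; in practice you would pipe in the output of `curl -s` against your own site as seen from an affected network.

```shell
# Illustrative allow-list of hosts that legitimately serve this site's scripts.
allowed='info\.santander\.com\.uy|cdnjs\.cloudflare\.com'

# Sample HTML standing in for a page fetched through a compromised router.
html='<html><head><script src="http://evil.example/inject.js"></script></head></html>'

# Pull out script src URLs and flag any host outside the allow-list.
printf '%s\n' "$html" \
  | grep -oE '<script[^>]+src="[^"]+"' \
  | sed -E 's/.*src="([^"]+)".*/\1/' \
  | grep -vE "^https?://($allowed)(/|$)" \
  && echo "unexpected script host(s) above - possible injection"
```

A script injected by a man-in-the-middle would show up as an unexpected host in the output.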
As this was a massive outbreak, it is in my opinion much more probable that the detection was caused by infected routers (a number I cannot estimate) than by the blocked "info." subdomain (a number I estimate at 30 users total).
-
That may indeed be the case.
But even so, why was the subdomain blocked?
Is it possible that this MikroTik infection made the Avast engine think the subdomain was indeed compromised, and it got blocked?
Until yesterday, scanning a file on disk with a reference to info.santander.com.uy resulted in a positive (I'm attaching a file that was triggering HTML:Script-inf yesterday just by being scanned, no network involved).
So the domain was obviously blacklisted.
-
Hi,
These two things are definitely not related (because the block happened before the infection of the MikroTik devices), and I am not saying we didn't block it, just that there were two causes with the same result.
As for the reason, it is too far back; I have no idea why it was blocked back in July.
-
Ok, that's clear.
Nonetheless, we need to understand what may have happened in order to avoid this in the future; believe me, we had a hectic weekend because of this.
I've seen reports as far back as 2016 of Avast classifying sites such as google.com as phishing: https://forum.avast.com/index.php?topic=186672.0
I assume you have an automatic procedure without human vetting before blacklisting a domain; do you have a best-practices list or criteria to avoid being flagged again? Since this is an automated procedure, we can't rely on someone from your team actually verifying the validity of a report.
Thanks again!
-
Just so as not to let this topic die:
Is there any best-practices or recommendations guide to avoid being automatically classified as a phishing site?
Thanks again!
G.-
-
The best way to go is to develop with security in mind; that means keeping up with best practices.
For any successful compromise, the attacker's motto would be "Use the source code, Luke", so do not let anyone intelligently poke into developer accounts: you may see several attempts per account, and they will always attack your boss's account, as he may not be as aware of user-enumeration attacks as you are. Check the server application logs to know what is actually going on.
Persistent attackers always pose a challenge. Be fully aware of the attack surface you leave open or have not accounted for
(handle tokens securely, encrypt all your traffic; iframe tags can lead to any form of exploit, very bad but also very commonly found).
On the subject of PHISHING: to protect your web server from infection, make sure you protect your root password:
Make sure you don't use it across unencrypted connections.
Make sure you don't allow direct root login over the network, so nobody can perform online brute-force or dictionary password-cracking attempts. A previous article of mine can help secure your server against brute-force password cracking.
Make sure your root password is strong: preferably at least 12 characters, including capital and lower-case letters, numbers, special characters, and spaces.
Make sure your password hashes use Blowfish (bcrypt) instead of MD5 or DES.
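For that last point, the crypt scheme in use can be read from the prefix of the hash field in /etc/shadow ($2a$/$2b$/$2y$ = Blowfish/bcrypt, $6$ = SHA-512, $1$ = legacy MD5). A minimal sketch, using an illustrative sample entry rather than a real shadow file (on a real system you would read /etc/shadow as root):

```shell
# Illustrative shadow-style entry; the hash value here is made up.
sample='admin:$2y$10$abcdefghijklmnopqrstuv:18000:0:99999:7:::'

# The scheme identifier sits between the first two '$' signs of field 2.
scheme=$(printf '%s\n' "$sample" | awk -F: '{ split($2, h, "$"); print h[2] }')

case "$scheme" in
  2a|2b|2y) echo "Blowfish/bcrypt - good" ;;
  1)        echo "MD5 - weak, migrate" ;;
  *)        echo "scheme \$$scheme\$ - check your crypt settings" ;;
esac
```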
To check whether your web server is infected, try creating a directory whose name starts with a numeral, with a command like:
mkdir 123
If it doesn't work, your system is probably infected.
(source quote credits go to TechRepublic)
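The quoted check can be wrapped so it cleans up after itself; a healthy system should take the first branch:

```shell
# Per the quoted TechRepublic note, an infected server reportedly fails
# to create directories whose names start with a digit.
if mkdir 123 2>/dev/null; then
    echo "directory created - no sign of this particular infection"
    rmdir 123   # remove the test directory again
else
    echo "mkdir failed - investigate further"
fi
```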
polonus (volunteer 3rd party cold reconnaissance website security analyst and website error-hunter)
-
polonus, thanks for your answer.
I don't mean any disrespect, and I truly appreciate your commitment here, but you should really read what is written before posting your "consulting" recommendations.
The site was flagged WITHOUT being compromised; what I need are best practices regarding Avast's algorithmic flagging, not on how to secure a server.
My understanding from this thread is that Avast doesn't want to recognize that their classification algorithm has some serious problems. (In the end, it must be cheaper to fix by hand what the algorithm mistakenly flags than to manually vet every flagged vulnerability.)
Thanks,
G.-
-
Hi guillermow,
I do not like to talk at cross-purposes. With general recommendations, I just report what is found during third-party cold reconnaissance scanning, based on my 12 years of experience doing so. Whenever I stumble upon retirable libraries, outdated code, code errors, etc., I give D- and F-status grades for the record. What you do with such "pointers" is up to you and/or your hoster/CDN, etc.
Where Avast detection is concerned, that is "their part of the bargain", and it depends on decisions made by the Avast team. I am just a volunteer with relevant knowledge on the official Avast support forums; I cannot take any responsibility for Avast detection issues, whether they stem from actual detections or from third-party reports by the Avast community or other listings. That is outside my scope. I highly respect the Avast team members' comments and reactions here. I guess they are glad that you report issues to them, and also with what I am trying to do in reporting on the state of (in)security of your website.
Have a nice day, vaya con Dios,
Damian aka polonus
-
My understanding from this thread is that Avast doesn't want to recognize that their classification algorithm has some serious problems.
I am sorry that this is your understanding, it certainly isn't the case. We recognize that there may be false positives, both in our automatic systems and in manual analyses.
Unfortunately, there are so many automatic systems that there is no silver bullet. Most URLs are blocked because they are serving content that we deem malicious, but there are many other systems that block according to very specific rules. Also, we have many automatic unblocking systems, but as your domain only had a couple of visitors during the weekend, it was not even considered (certain traffic is needed to enter the algorithm).
-
Also we have many automatic unblocking systems, but as your domain only had a couple of visitors during the weekend, it was not even considered (certain traffic is needed to enter the algorithm).
Interesting, thanks Honza, good to know.
-
Also we have many automatic unblocking systems, but as your domain only had a couple of visitors during the weekend, it was not even considered (certain traffic is needed to enter the algorithm).
Interesting, thanks Honza, good to know.
Just to be clear - we have a couple of automatic unblocking systems, but the one that could unblock this particular domain needed bigger traffic.
-
Just to be clear - we have a couple of automatic unblocking systems, but the one that could unblock this particular domain needed bigger traffic.
OK, roger that. Cheers
-
@HonzaZ, thanks for your answer.
My original question was aimed at understanding why the content we served was deemed malicious and how to prevent it in the future (best practices).
We were certainly not serving malicious content, since it was only test HTML files and images.
Is there any kind of information you can share about this?
Thanks again
-
Hi,
What I was trying to say (apparently unsuccessfully) is that there are many independent algorithms that block URLs based on many factors. Even if I described each algorithm to you in detail (which would take at least a two-day seminar), they can be changed in the future anyway, so there is no incentive to do that.
Just as polonus said, the best way to avoid detection is to "behave normally": have no vulnerabilities, serve no malicious code, etc. But even then there can be false positives.