Bots and fraudulent traffic can be harmful to a website, so the question becomes: how do we detect them? We could build a firewall rule around a particular behaviour, but if a bot adapts its pattern over time, that rule-based approach eventually fails.
This is where machine learning saves us from these harmful bots. Bots follow certain patterns: for example, a human cannot visit an entire website in one second, and a human will not open a particular domain 100 times in an hour.
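To make the idea concrete, here is a minimal sketch of turning raw visit logs into such a behavioural signal. The log format (`visitor_id`, `domain`, timestamp tuples) and the helper names are assumptions for illustration, not part of the IBM dataset:

```python
from collections import defaultdict

def requests_per_hour(events):
    """Count hits per (visitor, domain) pair, bucketed by hour.

    `events` is a list of (visitor_id, domain, unix_timestamp) tuples.
    """
    counts = defaultdict(int)
    for visitor, domain, ts in events:
        hour_bucket = int(ts // 3600)
        counts[(visitor, domain, hour_bucket)] += 1
    return counts

def flag_suspicious(events, threshold=100):
    """Flag (visitor, domain) pairs exceeding `threshold` hits in one hour."""
    counts = requests_per_hour(events)
    return {(v, d) for (v, d, h), n in counts.items() if n >= threshold}

# Example: one visitor hits example.com 120 times within the same hour.
events = [("bot-1", "example.com", 1000 + i) for i in range(120)]
events += [("user-7", "example.com", 5000)]
print(flag_suspicious(events))  # {('bot-1', 'example.com')}
```

A hard-coded threshold like this is exactly the kind of rule a learning bot can route around, which is why the features feed a model instead of a fixed firewall rule.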
These observations lead us to an important idea: we can build a dataset from these behavioural patterns and analyse it with statistics.
Here the IBM bot detection dataset saves us a lot of time by providing a fully prepared dataset for the task.
The notebook contains this dataset, a statistical analysis of bot behaviour, and a model that can detect bots.
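As a sketch of what such a model can look like in its simplest form, the snippet below learns a single decision threshold on a request-rate feature from labelled examples. The data and feature are invented for illustration; the actual notebook model trains on the IBM dataset's features:

```python
def fit_threshold(rates, labels):
    """Learn the threshold on request rate that best separates
    bots (label 1) from regular customers (label 0) on training data."""
    best_t, best_acc = None, 0.0
    for t in sorted(set(rates)):
        preds = [1 if r >= t else 0 for r in rates]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

rates = [2, 5, 3, 150, 300, 220]   # requests per hour per visitor (made up)
labels = [0, 0, 0, 1, 1, 1]        # 1 = bot
t = fit_threshold(rates, labels)
print(t)  # 150: visitors at or above this rate are predicted as bots
```

A real classifier (logistic regression, random forest, etc.) generalises this idea across many features at once instead of one threshold on one feature.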
When it comes to detection difficulties, dataset bias and precision/recall matter a lot. This is a classification problem in which we must classify each visitor as either a bot or a regular customer.
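This is why accuracy alone is not enough on a biased dataset: if bots are rare, a model that flags nobody is highly accurate yet useless. A small illustrative sketch of computing precision and recall by hand, with made-up predictions and labels:

```python
def precision_recall(preds, labels):
    """Precision and recall for the positive (bot = 1) class."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

labels = [0] * 9 + [1]   # only 1 bot among 10 visitors
preds = [0] * 10         # a model that never flags anyone
# This model is 90% accurate but its recall is 0: it misses every bot.
print(precision_recall(preds, labels))  # (0.0, 0.0)
```

High recall means few bots slip through; high precision means few regular customers are wrongly blocked, and the right trade-off depends on the cost of each mistake.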