Archive for the 'data science' Category

In 2017 a research group at the University of Washington did a study on the Black Lives Matter movement on Twitter. They constructed what they call a “shared audience graph” to analyse the different groups of audiences participating in the debate, and found an alignment of the groups with the political left and political right, as well as clear alignments with groups participating in other debates, like environmental issues, abortion issues and so on. In simple terms, someone who is pro-environment, pro-abortion and left-leaning is also likely to support the Black Lives Matter movement, and vice versa.

F: Ok, this seems to make sense, right? But… I suspect there is more to this story?

So far, yes. What they did not expect to find, though, was a pervasive network of Russian accounts participating in the debate, which turned out to be orchestrated by the Internet Research Agency, the not-so-secret Russian secret service agency for internet black ops. The same agency allegedly connected with the US election and the Brexit referendum.

F: Are we talking about actual spies? Where are you going with this?

Basically, the Russian accounts (part of them human and part of them bots) were infiltrating all aspects of the debate, both on the left and on the right side, and always taking the most extreme stances on any particular aspect of the debate. The aim was to radicalise the conversation, to make it more and more extreme, in a tactic of divide-and-conquer: turn the population against itself in an online civil war, push for policies that normally would be considered too extreme (for instance, give tanks to the police to control riots, force a curfew, try to ban Muslims from your country). Chaos and unrest have repercussions on international trade and relations, and can align to foreign interests.

F: It seems like a pretty indirect and convoluted way of influencing a foreign power…

You might think so, but you are forgetting social media. This sort of operation is directly exploiting a core feature of internet social media platforms. And that feature, I am afraid, is recommender systems.

F: Whoa. Let’s take a step back. Let’s recap the general features of recommender systems, so we are on the same page. 

The main purpose of recommender systems is to recommend items that similar people have shown an interest in.
Let’s think about books and readers. The general idea is to find a way to match each reader with the books they are most likely to enjoy. Amazon is doing it, Netflix is doing it, and probably the bookstore down the road does it too, just on a smaller scale.
Some of the most common methods to implement recommender systems use concepts such as cosine/correlation similarity, matrix factorization, neural autoencoders and sequence predictors.
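To make the cosine-similarity idea concrete, here is a toy sketch of user-based collaborative filtering (the rating matrix, the user indices and the helper names are invented for illustration): it finds the user most similar to a target user, then recommends the unrated book that this neighbour liked best.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy user x book rating matrix (rows: users, columns: books); 0 = unrated.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 3],
    [1, 0, 5, 4],
])

def recommend(user, ratings):
    """Recommend the book, unrated by `user`, that the most similar user rated highest."""
    sims = [cosine_sim(ratings[user], ratings[v]) if v != user else -1.0
            for v in range(len(ratings))]
    neighbour = int(np.argmax(sims))               # most similar other user
    unrated = np.where(ratings[user] == 0)[0]      # candidate books
    return int(unrated[np.argmax(ratings[neighbour][unrated])])

print(recommend(0, ratings))  # 3: user 1 is most similar to user 0 and rated book 3 highest
```

Real systems replace this brute-force loop with matrix factorization or neural models, but the underlying logic, "people like you liked this", is the same.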

The major issue with recommender systems is their validation. Validation occurs in a way that is similar to many other machine learning methods, but to do it properly one should first recommend a set of items (in production) and then measure the efficacy of that recommendation. The catch is that recommending already alters the entire scenario, a bit in the flavour of Heisenberg’s uncertainty principle.

F: In the attention economy, the business model is to monetise the time the user spends on a platform, by showing them ads. Recommender systems are crucial for this purpose.
Chiara, you are saying that these algorithms have effects that are problematic?

As you say, recommender systems exist because the business model of social media platforms is to monetise attention. The most effective way to keep users’ attention is to show them content they are likely to be interested in.
In order to do that, one must segment the audience to find the best content for each user. But then, for each user, how do you keep them engaged and make them consume more content?

F: You’re going to say the word “filter bubble” very soon.

Spot on. To keep the user on the platform, you start by showing them content that they are interested in, and that agrees with their opinion. 

But that is not all. How many videos of the same stuff can you watch, how many articles can you read? You must also escalate the content that the user sees, increasing the wow factor. The content goes from mild to extreme (conspiracy theories, hate speech etc).

The recommended content pushes the user opinion towards more extreme stances. It is hard to see from inside the bubble, but a simple experiment will show it. If you continue to click the first recommended video on YouTube, and you follow the chain of first recommended videos, soon you will find yourself watching stuff you’d never have actively looked for, like conspiracy theories, or alt-right propaganda (or pranks that get progressively more cruel, videos by people committing suicide, and so on).

F: So you are saying that this is not an accident: is this the basis of the optimisation of the recommender system? 

Yes, and it’s very effective. But obviously there are consequences. 

F: And I’m guessing they are not good. 

The collective result of single users being pushed toward more radical stances is a radicalisation of the whole conversation, the disappearance of nuance in the argument, and the trivialisation of complex issues. For example, the Brexit debate in 2016 was about trade deals and customs unions, and now it is about remain vs no deal, with almost nothing in between.

F: Yes, the conversation is getting stupider. Is this just a giant accident? Just a sensible system that got out of control?

Yes and no. Recommender systems originate as a tool for boosting commercial revenue, by selling more products. But applied to social media, they have caused an aberration: the recommendation of information, which leads to the so-called filter bubbles, the rise of fake news and disinformation, and the manipulation of the masses. 

There is an intense debate in the scientific community about the polarising effects of the internet and social media on the population. An example of such a study is a paper by Johnson et al. It predicts that whether and how a population becomes polarised is dictated by the nature of the underlying competition, rather than by the validity of the information that individuals receive or by their online bubbles.

F: I would like to stress this finding. This is really f*cked up. Polarisation is not caused by the particular subject, nor by the way a debate is conducted, but by how legitimate the information seems to the single person. Which means that if I find a way to convince single individuals of something, I will in fact be manipulating the debate at a community scale or, in some cases, globally!
Oh my god, we seem to be so f*cked.

Take for instance the people who believe that the Earth is flat. Or the time it took people to recognise global warming as scientifically established, despite the fact that the threshold for scientific confirmation was reached decades ago.

F: So, recommender systems let loose on social media platforms amplify controversy and conflict, and fringe opinions. I know I’m not going to like the answer, but I’m going to ask the question anyway.
This is all just an innocent mistake, right? 

Last year, the European Data Protection Supervisor published a report on online manipulation at scale.

F: That does not sound good.

The online digital ecosystem has connected people across the world, with over 50% of the population on the internet, albeit very unevenly in terms of geography, wealth and gender. The initial optimism about the potential of internet tools and social media for civic engagement has given way to concern that people are being manipulated. This happens through the combination of the constant harvesting of often intimate information about them, and the control over the information they see online according to the category they are put into (the so-called segmentation of the audience). Arguably since 2016, but probably before, mass manipulation at scale has occurred during democratic elections, by using algorithms to game recommender systems and spread misinformation, among other things. Remember Cambridge Analytica?

F: I remember. I wish I didn’t. But why does it work? Are we so easy to manipulate? 

An interesting point is this. When one receives information collectively, for example from the television news, it is far less likely that she develops extreme views (like, the Earth is flat), because she bases the discourse on a common understanding of reality. And people call out each other’s bulls*it.

F: Fair enough.

But when one receives information singularly, as happens via a recommender system through micro-targeting, then reality has a different manifestation for each audience member, with no common ground. It is far more likely that they adopt extreme views, because there is no way to fact-check, and because the news feels personal. In fact, such news is tailored to each user precisely to push their buttons.
Francesco, if you show me George Clooney shirtless and holding a puppy, and George tells me that the Earth is flat, I might have doubts for a minute. Too personal?

F: That’s good to know about you. I’m more of a cat person. But, experts keep saying that we are moving towards personalisation of everything. While this makes sense for things like personalised medicine, it probably is not that beneficial with many other kinds of recommendations. Especially not the news.
But social media feeds are extremely personalised. What can we do? 

Solutions so far have focused on transparency measures, exposing the source of information while neglecting the accountability of the players in the ecosystem who profit from harmful behaviour. But these are band-aids on bullet wounds.
The problem is the social media platforms themselves. In October 2019 Zuckerberg was in front of Congress again, because Facebook still refuses to fact-check political advertisements, after everything that has happened. At the same time, market concentration and the rise of platform dominance threaten media pluralism. This, in turn, leads to the repetition and amplification of a handful of news pieces and to the silencing of independent journalism.

F: When I think of a recommender system, I think of Netflix.

  • You liked this kind of show in the past, so here are more shows of the same genre
  • People like you have liked this other type of show. Hence, here it is for your consideration

This seems relatively benign. Although, if you think about it some more, you realise that this mechanism will prevent you from actually discovering anything new: it just gives you more of what you are likely to like. But one would not think that this could have world-changing consequences.
If you think of the news, however, this mechanism becomes lethal: in the mildest form (which is already bad) you will only hear opinions that already align with those of your peer group. In the worst scenario, you will not hear some news at all, or you will hear a misleading or false version of it, without even knowing that a different version exists.

In the Brexit referendum, misleading or false content (like the famous NHS money that was supposedly going to the EU) was amplified inside filter bubbles. Each bubble of people was essentially seeing a different version of the same issue. Brexit was a million different things, depending on your social media feeds.
And of course, there are malicious players in the game, like the Russian Internet Research Agency and Cambridge Analytica, who actively exploited these features in order to swing the vote.

F: Even the traditional media is starting to adopt recommender systems for the news content. This seems like a very bad idea, after all. Is there any other scenario in which recommender systems are not great? 

Recommender systems are used in a variety of applications.
Take the job market, for instance. A recommender system that limits exposure to certain information about jobs on the basis of a person’s gender or inferred health status perpetuates discriminatory attitudes and practices. In the US, similar algorithmic scoring systems are used to inform bail decisions for people who have been arrested, disproportionately penalising people of colour. This has to do with the training of the algorithm: in an already unequal system (where, for instance, there are few women in top managerial positions, and more African-Americans in jail than white Americans) a recommender system will by design amplify such inequality.

F: Recommender systems are part of the problem, and they make everything worse. But the origin of the problem lies somewhere else, I suspect. 

Yep. The problem with recommender systems goes even deeper. I would connect it to the problem of privacy. A recommender system only works if it knows its audience; these systems are so powerful because they know everything about us.
We don’t have any privacy anymore. Online players know exactly who we are; our lives are transparent to both corporations and governments. For an excellent analysis of this, read Snowden’s book “Permanent Record”. I highly recommend it.

F: The pun was intended wasn’t it?

With all this information about us, we are put into “categories” for specific purposes: selling us products, influencing our vote. We are targeted with ads aimed at our specific category, and this generates more discussion and more content on our social media. Recommender systems amplify the targeting by design. They would be much less effective, and much less dangerous, in a world where our lives were private.

F: Social media platforms base their whole business model on “knowing us”. The business model itself is problematic.

As we said in the previous episode, the internet has become centralised, with a handful of platforms controlling most of the traffic. In some countries like Myanmar, internet access itself is provided and controlled by Facebook. 

F: Chiara, where’s Myanmar?

In South-East Asia, between India and Thailand.
In effect, the forum for public discourse and the available space for freedom of speech are now bounded by the profit motives of powerful private companies. Citing technical complexity or commercial secrecy, such companies decline to explain how decisions are made. Most of those decisions are made by recommender algorithms, which amplify bias and segregation. And at the same time, the few major platforms, with their extraordinary reach, offer an easy target for people seeking to use the system for malicious ends.

Conclusion

This is our call to all data scientists out there: be aware of personalisation when building recommender systems. Personalisation is not always beneficial. There are a few cases where it is (e.g. medicine, genetics, drug discovery), and many other cases where it is detrimental (e.g. news, consumer products/services, opinions).
Personalisation by algorithm, and in particular of the news, leads to a fragmentation of reality that undermines democracy. Collectively, we need to push for reining in targeted advertising, and the path to that leads through stricter rules on privacy. As long as we are completely transparent to commercial and governmental players, as we are today, we are vulnerable to lies, misdirection and manipulation.
As Christopher Wylie (the Cambridge Analytica whistleblower) eloquently put it, it’s like going on a date where you know nothing about the other person, but they know absolutely everything about you.
We are left without agency, and without real choice.
In other words, we are f*cked.

References

Black Lives Matter / Internet Research Agency (IRA) articles:

http://faculty.washington.edu/kstarbi/Stewart_Starbird_Drawing_the_Lines_of_Contention-final.pdf

https://medium.com/s/story/the-trolls-within-how-russian-information-operations-infiltrated-online-communities-691fb969b9e4

https://faculty.washington.edu/kstarbi/BLM-IRA-Camera-Ready.pdf

IRA tactics:
https://int.nyt.com/data/documenthelper/533-read-report-internet-research-agency/7871ea6d5b7bedafbf19/optimized/full.pdf#page=1

https://int.nyt.com/data/documenthelper/534-oxford-russia-internet-research-agency/c6588b4a7b940c551c38/optimized/full.pdf#page=1

EDPS report
https://edps.europa.eu/sites/edp/files/publication/18-03-19_online_manipulation_en.pdf

Johnson et al.  “Population polarization dynamics and next-generation social media algorithms” https://arxiv.org/abs/1712.06009

Read Full Post »

Chamath Palihapitiya, former Vice President of User Growth at Facebook, was giving a talk at Stanford University, when he said this:
“I feel tremendous guilt. The short-term, dopamine-driven feedback loops that we have created are destroying how society works ”.

He was referring to how social media platforms leverage our neurological make-up in the same way slot machines and cocaine do, to keep us using their products as much as possible. They turn us into addicts.

 

F: how many times do you check your Facebook in a day?

I am not a fan of Facebook. I do not have it on my phone.  Still, I check it in the morning on my laptop, and maybe twice more per day. I have a trick though: I do not scroll down. I only check the top bar to see if someone has invited me to an event, or contacted me directly. But from time to time, this resolution of mine slips, and I catch myself scrolling down, without even realising it!

 

F: is it the first thing you check when you wake up?

No because usually I have a message from you!! :) But yes, while I have my coffee I do a sweep on Facebook and twitter and maybe Instagram, plus the news.

 

F: Check how much time you spend on Facebook

And then add to that your email, Twitter, Reddit, YouTube, Instagram, etc. (all viable channels for ads to reach you)

We have an answer. More on that later. 
Clearly in this episode there is some form of addiction we would like to talk about. So let’s start from the beginning: how does addiction work?

Dopamine is a hormone produced by our body, and in the brain it works as a neurotransmitter, a chemical that neurons use to transmit signals to each other. One of the main functions of dopamine is to shape “reward-motivated behaviour”: this is the way our brain learns through association, positive reinforcement, incentives, and positively-valenced emotions, in particular, pleasure. In other words, it makes our brain desire more of the things that make us feel good. These things can be for example good food, sex, and crucially, good social interactions, like hugging your friends or your baby, or having a laugh together. Because we have evolved to be social animals with complex social structures, successful social interactions are an evolutionary advantage, and therefore they trigger dopamine release in our brain, which makes us feel good and reinforces the association between the action and the reward. This feeling motivates us to repeat the behaviour.

 

F: now that you mention reinforcement, I recall that this mechanism is so powerful and effective that we have taken inspiration from nature and replicated it in silico with reinforcement learning. The idea is to motivate an agent (and eventually create an addictive pattern) to follow what is called the optimal policy, by giving it positive rewards or punishing it when things don’t go the way we planned.

In our brain, every time an action produces a reward, the connection between action and reward becomes stronger. Through reinforcement, a baby learns to distinguish a cat from a dog, or that fire hurts (that was me).

 

F: and so this means that all the social interactions people get from social media platforms are in fact doing the same, right? 

Yes, but with a difference: smartphones in our pockets keep us connected to an unlimited reserve of constant social interactions. This constant flux of notifications - the rewards - floods our brain with dopamine. The mechanism of reinforcement can spin out of control. The reward pathways in our brain can malfunction, and this leads to addiction.

 

F: you are saying that social media has LITERALLY the effect of a drug? 

Yes. In fact, social media platforms are DESIGNED to exploit the reward system in our brain. They are designed to work like a drug.
Have you been to a casino and played roulette or the slot machines? 

 

F: ...maybe?

Why is it fun to play roulette? The fun comes from the WAIT before the reward. You put a chip on a number, you don’t know how it’s going to go. You wait for the ball to spin, you get excited. And from time to time, BAM! Your number comes out. Now, compare this with posting something on facebook. You write a message into the void, wait…. And then the LIKES start coming in. 

 

F:  yeah i find that familiar... 

Contrary to the casino, social media platforms do not want our money; in fact, they are free. What they want, and what we are paying with, is our time. Because the longer we stay on, the longer they can show us ads, and the more money advertisers pay them. This is no accident, this is the business model. But asking for our time out loud would not work; we would probably not consciously give it to them. So, like a casino, they make it hard for us to get off, once we are on: they make us crave the likes, the right-swipes, the retweets, the subscriptions. So we check in, we stay on, we keep scrolling, because we hope to get those rewards. The short-term satisfaction of getting a “like” is a little boost of dopamine in our brain. We get used to it, and we want more.

 

F: a lot of machine learning is also being deployed to amplify this form of addiction and make it... well, more addictive :) But the question is: how much of the effectiveness of these ads and scenarios comes from the algorithms, and how much from the fact that humans are simply wired to obey such dynamics? My question is: are we essentially flawed, or are these algorithms truly powerful?

It is not a flaw, it’s a feature. The way our brain has evolved has been a response to very specific needs. In particular, for this conversation, our brain is wired to favour social interactions, because they are an evolutionary advantage. These algorithms exploit these features of the brain on purpose; they are designed to exploit them.

 

F: I believe so, but I also believe that the human brain is a powerful machine, so it should be able to predict what satisfaction it can get from social media. So how does it happen that we become addicted?

An example of an optimisation strategy that social media platforms use is based on the principle of “reward prediction error coding”. Our brain learns to find patterns in data - this is a basic survival skill - and therefore learns when to expect a reward for a given set of actions. I eat cake, therefore I am happy. Every time.
Imagine we have learnt through experience that slot machines in a casino pay out once every 100 times we pull the lever. The difference between predicted and received rewards is then a known, fixed quantity, and just after winning once we have almost zero incentive to play again. So the casino rigs the slot machines to introduce a random element in the timing of the reward. Suddenly our prediction error increases substantially. In this margin of error, in the time between the action (pulling the lever) and the reward (maybe), our brain has time to anticipate the result and get excited at the possibility, and this releases dopamine. Playing in itself becomes a reward.
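The casino trick can be simulated with a small sketch (the learning rate, payout sizes and probabilities are invented for illustration): a Rescorla-Wagner style learner tracks the expected reward V, the per-trial prediction error is delta = r - V, and we compare the leftover surprise for a perfectly predictable payout versus a random payout with the same average value.

```python
import random

def mean_abs_prediction_error(rewards, alpha=0.1):
    """Track the expected reward V; delta = r - V is the reward prediction error."""
    V, errors = 0.0, []
    for r in rewards:
        delta = r - V
        V += alpha * delta          # learn: move the prediction toward the outcome
        errors.append(abs(delta))   # how surprised were we on this trial?
    return sum(errors[-1000:]) / 1000  # average surprise once learning has settled

random.seed(0)
n = 10_000
fixed = [0.01] * n  # predictable machine: the same tiny payout on every pull
rigged = [1.0 if random.random() < 0.01 else 0.0 for _ in range(n)]  # same mean, random timing

print(mean_abs_prediction_error(fixed))   # ~0: the predictable machine becomes boring
print(mean_abs_prediction_error(rigged))  # much larger: every pull stays exciting
```

For the predictable machine the prediction error decays to zero, so there is nothing left to anticipate; the rigged machine keeps the error, and therefore the dopamine, alive indefinitely.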

F: There is an equivalent in reinforcement learning called the grid world, which consists of a mouse getting to the cheese in a maze. In reinforcement learning, everything works smoothly as long as the cheese stays in the same place.
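That grid world can be sketched as a tiny Q-learning loop (the corridor length, rewards and hyperparameters are made up for illustration): the mouse starts at the left end, the cheese sits at the right end, and as long as the cheese does not move, the learnt policy is simply “go right”.

```python
import random

# A 1x5 corridor: the mouse starts at cell 0, the cheese sits at cell 4.
N, CHEESE = 5, 4
ACTIONS = [-1, 1]  # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

random.seed(42)
for _ in range(500):  # training episodes
    s = 0
    while s != CHEESE:
        # epsilon-greedy action choice, with random tie-breaking
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: (Q[(s, x)], random.random()))
        s2 = min(max(s + a, 0), N - 1)     # bump into the walls
        r = 1.0 if s2 == CHEESE else 0.0   # cheese!
        # Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy after training: +1 (go right) in every non-terminal cell.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N - 1)]
print(policy)
```

Move the cheese after training and the learnt Q-values are suddenly wrong, which is exactly the fragility being discussed.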

Exactly! Now social media apps implement an equivalent trick, called “variable reward schedules”.

In our brain, after an action we get a reward or a punishment, and we generate positive or negative feedback to that action.
Social media apps optimise their algorithms for the ideal balance of negative and positive feedback in our brains, caused by the difference between predicted and received rewards.

If we perceive a reward to be delivered at random, and - crucially - if checking for the reward comes at little cost, like opening the Facebook app, we end up checking for rewards all the time. Every time we are just a little bit bored, without even thinking, we check the app. The Facebook reward system (the schedule and triggers of notification and likes) has been optimised to maximise this behaviour. 

 

F: are you saying that buffering some likes and then finding the right moment to show them to the user can make the user crave the reward?

Oh yes. Instagram will withhold likes for a period of time, causing a dip in reward compared to the expected level. It will then deliver them later in larger bundles, thus boosting the reward above the expected value, which triggers extra dopamine release and sends us on a high akin to a cocaine hit.

 

F: Dear audience, do you remember my question? How much time do each of you spend on social media (or similar) in a day? And why do we still do it?

The fundamental feature here is how small the perceived cost of checking for the reward is: I just need to open the app. We perceive this cost as minimal, so we don’t even think about it. YouTube, for instance, has the autoplay feature, so you need to do absolutely nothing to remain on the app. But the cost is cumulative over time: it becomes hours in our day, days in a month, years in our lives!! 2 hours of social media per day amounts to 1 month per year.
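The arithmetic behind that last claim is a quick check:

```python
hours_per_day = 2
hours_per_year = hours_per_day * 365   # 730 hours of scrolling per year
days_per_year = hours_per_year / 24    # ~30.4 full days, i.e. about one month
print(round(days_per_year, 1))
```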

 

F: But it’s so EASY, it has become so natural to use social media for everything. To use Google for everything.

The convenience that the platforms give us is one of the most dangerous things about them, and not only for our individual lives. The convenience of reaching so many users, together with the business model of monetising attention, is one of the causes of the centralisation of the internet, i.e. the fact that a few giant platforms control most of the internet traffic. Revenue from ads is concentrated on big platforms, and content creators have no choice but to use them if they want to be competitive. The internet went from looking like a distributed network to a centralised network. And this in turn causes data to be centralised, in a self-reinforcing loop. Most human conversations and interactions pass through the servers of a handful of private corporations.

Conclusion

As Data scientists we should be aware of this (and we think mostly we are). We should also be ethically responsible. I think that being a data scientist no longer has a neutral connotation. Algorithms have this huge power of manipulating human behaviour, and let’s be honest, we are the only ones who really understand how they work. So we have a responsibility here. 

There are some organisations, like Data For Democracy for example, who are advocating for something equivalent to the Hippocratic Oath for data scientists. Do no harm.  

 

References

Dopamine reward prediction error coding https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4826767/

Dopamine, Smartphones & You: A battle for your time http://sitn.hms.harvard.edu/flash/2018/dopamine-smartphones-battle-time/

Reward system https://en.wikipedia.org/wiki/Reward_system

Data for Democracy: datafordemocracy.org

Read Full Post »

In this episode I speak with Jon Krohn, author of Deep Learning Illustrated, a book that makes deep learning easier to grasp.
We also talk about some important guidelines to take into account whenever you implement a deep learning model, how to deal with bias in machine learning used to match jobs to candidates, and the future of AI.
 
 
You can purchase the book from informit.com/dsathome with code DSATHOME and get 40% off books/eBooks and 60% off video training

Read Full Post »

Join the discussion on our Discord server

 

In this episode, I am with Aaron Gokaslan, computer vision researcher and AI Resident at Facebook AI Research. Aaron is the author of OpenGPT-2, an open replication of the much-discussed NLP model that OpenAI initially decided not to release because it was deemed too risky to publish.

We discuss image-to-image translation, the dangers of the GPT-2 model and the future of AI.
Moreover, Aaron provides some very interesting links and demos that will blow your mind!

Enjoy the show! 

References

Multimodal image to image translation (not all mentioned in the podcast but recommended by Aaron)

Pix2Pix: 
 
CycleGAN:
 

GANimorph

 

Read Full Post »

Join the discussion on our Discord server

 

After reinforcement learning agents did great at playing Atari video games and Go (AlphaGo), at financial trading, and at language modeling, let me tell you the real story here.
In this episode I want to shine some light on reinforcement learning (RL) and the limitations that every practitioner should consider before taking certain directions. RL seems to work so well! What is wrong with it?

 

Are you a listener of Data Science at Home podcast?
A reader of the Amethix Blog? 
Or did you subscribe to the Artificial Intelligence at your fingertips newsletter?
In any case let’s stay in touch! 
https://amethix.com/survey/

 

 


Read Full Post »

Join the discussion on our Discord server

 

In this episode I have an amazing conversation with Jimmy Soni and Rob Goodman, authors of “A Mind at Play”, a book entirely dedicated to the life and achievements of Claude Shannon. Claude Shannon does not need any introduction. But for those who need a refresher: Shannon is the inventor of the information age.

Have you heard of binary code, entropy in information theory, data compression theory (the stuff behind mp3, mpg, zip, etc.), error correcting codes (the stuff that makes your RAM work well), n-grams, block ciphers, the beta distribution, the uncertainty coefficient?

All that stuff has been invented by Claude Shannon :) 

 
Articles: 
 
Claude's papers:
 
A mind at play (book links): 

Read Full Post »

Join the discussion on our Discord server

As ML plays a more and more relevant role in many domains of everyday life, it is no surprise to see more and more attacks on ML systems. In this episode we talk about the most popular attacks against machine learning systems, and some mitigations designed by researchers Ambra Demontis and Marco Melis from the University of Cagliari (Italy). The guests are also among the authors of SecML, an open-source Python library for the security evaluation of Machine Learning (ML) algorithms. Both Ambra and Marco are members of the research group PRAlab, under the supervision of Prof. Fabio Roli.
 

SecML Contributors

Marco Melis (Ph.D Student, Project Maintainer, https://www.linkedin.com/in/melismarco/)
Ambra Demontis (Postdoc, https://pralab.diee.unica.it/it/AmbraDemontis) 
Maura Pintor (Ph.D Student, https://it.linkedin.com/in/maura-pintor)
Battista Biggio (Assistant Professor, https://pralab.diee.unica.it/it/BattistaBiggio)

References

SecML: an open-source Python library for the security evaluation of Machine Learning (ML) algorithms https://secml.gitlab.io/.

Demontis et al., “Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks,” presented at the 28th USENIX Security Symposium (USENIX Security 19), 2019, pp. 321–338. https://www.usenix.org/conference/usenixsecurity19/presentation/demontis

P. W. Koh and P. Liang, “Understanding Black-box Predictions via Influence Functions,” in International Conference on Machine Learning (ICML), 2017. https://arxiv.org/abs/1703.04730

M. Melis, A. Demontis, B. Biggio, G. Brown, G. Fumera, and F. Roli, “Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid,” in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2017, pp. 751–759. https://arxiv.org/abs/1708.06939

B. Biggio and F. Roli, “Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning,” Pattern Recognition, vol. 84, pp. 317–331, 2018. https://arxiv.org/abs/1712.03141

B. Biggio et al., “Evasion attacks against machine learning at test time,” in Machine Learning and Knowledge Discovery in Databases (ECML PKDD), Part III, 2013, vol. 8190, pp. 387–402. https://arxiv.org/abs/1708.06131

B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in 29th Int’l Conf. on Machine Learning, 2012, pp. 1807–1814. https://arxiv.org/abs/1206.6389

N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, “Adversarial classification,” in Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), Seattle, 2004, pp. 99–108. https://dl.acm.org/citation.cfm?id=1014066

M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic Attribution for Deep Networks,” in Proceedings of the 34th International Conference on Machine Learning (ICML), 2017. https://arxiv.org/abs/1703.01365

M. T. Ribeiro, S. Singh, and C. Guestrin, “Model-Agnostic Interpretability of Machine Learning,” arXiv preprint arXiv:1606.05386, 2016. https://arxiv.org/abs/1606.05386

W. Guo et al., “LEMNA: Explaining Deep Learning Based Security Applications,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2018. https://dl.acm.org/citation.cfm?id=3243792

S. Bach et al., “On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation,” PLoS ONE, vol. 10, no. 7, e0130140, 2015. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140


Scaling technology is not the same as scaling business processes. Since the beginning of enterprise technology, scaling software has been a difficult task to get right inside large organisations. When it comes to Artificial Intelligence and Machine Learning, it becomes vastly more complicated.

In this episode I propose a framework, built on five pillars, for the business side of artificial intelligence.

 


In this episode, I am with Aaron Gokaslan, computer vision researcher and AI Resident at Facebook AI Research. Aaron is an author of OpenGPT-2, a replication of the GPT-2 model that OpenAI had decided not to release because it was considered too dangerous to publish.

We discuss image-to-image translation, the dangers of the GPT-2 model and the future of AI. Moreover, Aaron provides some very interesting links and demos that will blow your mind!

Enjoy the show! 

References

Multimodal image to image translation (not all mentioned in the podcast but recommended by Aaron)

Pix2Pix: 
 
CycleGAN:
 

GANimorph

 


Training neural networks faster usually involves the use of powerful GPUs. In this episode I explain an interesting method from a group of researchers at Google Brain, who train neural networks faster by repeating ("echoing") data through the slower stages of the input pipeline, so that the expensive accelerator is never left waiting for new batches.
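The core trick can be sketched in a few lines. The generator below is my own toy illustration of the idea, not the authors' code: each item produced by the slow upstream stages is simply yielded several times.

```python
def data_echoing(upstream, echo_factor=2):
    """Yield each upstream item `echo_factor` times.

    Upstream stages (reading, decoding, augmentation) are often slower
    than the accelerator; echoing their output trades a little statistical
    freshness of the batches for much better hardware utilisation."""
    for item in upstream:
        for _ in range(echo_factor):
            yield item

batches = list(data_echoing(iter(["b1", "b2"]), echo_factor=2))
# each upstream batch is consumed twice before the next one is fetched
```

The paper studies where in the pipeline to echo (before or after augmentation and shuffling) and shows that, within limits, the repeated data does not hurt final accuracy.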

Enjoy the show!

 

References

Faster Neural Network Training with Data Echoing
https://arxiv.org/abs/1907.05550


In this episode I explain how a research group from the University of Lübeck tamed the curse of dimensionality for the generation of large medical images with GANs.
The problem is not as trivial as it seems: many researchers have struggled to generate large images with GANs before. One interesting application of such an approach is in medicine, for the generation of CT and X-ray images.
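A rough sketch of the multi-scale idea, as my own toy illustration rather than the authors' architecture: synthesise a small global image first, then produce the high-resolution output patch by patch, each patch conditioned on the matching crop of the low-resolution image, so the full image never has to sit in GPU memory at once. Here `low_res_gen` and `patch_gen` are hypothetical stand-ins for trained generators:

```python
import numpy as np

def generate_large(low_res_gen, patch_gen, scale=4, patch=8):
    """Assemble a large image patch by patch from a small global image."""
    lr = low_res_gen()                    # small image fixing global structure
    H = lr.shape[0] * scale               # side of the full-resolution output
    out = np.zeros((H, H))
    for i in range(0, H, patch):
        for j in range(0, H, patch):
            # crop of the low-res image corresponding to this output patch
            cond = lr[i // scale:(i + patch) // scale,
                      j // scale:(j + patch) // scale]
            out[i:i + patch, j:j + patch] = patch_gen(cond)
    return out
```

Because only one patch is generated at a time, memory grows with the patch size rather than with the final image size.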
Enjoy the show!

 

References

Multi-scale GANs for Memory-efficient Generation of High Resolution Medical Images https://arxiv.org/abs/1907.01376


In this episode I am with Jadiel de Armas, senior software engineer at Disney and author of Videoflow, a Python framework that facilitates the quick development of complex video analysis applications and other stream-processing based applications in a multiprocessing environment.

I have inspected the Videoflow repo on GitHub and some of the capabilities of this framework, and I must say that it's really interesting. Jadiel is going to tell us a lot more than what you can read on GitHub.

 

References

Videoflow official GitHub repository
https://github.com/videoflow/videoflow

 


In this episode, I am with Dr. Charles Martin from Calculation Consulting, a machine learning and data science consulting company based in San Francisco. We speak about the nuts and bolts of deep neural networks and some impressive findings about the way they work.

The questions that Charles answers in the show are essentially two:

  1. Why is regularisation in deep learning seemingly quite different from regularisation in other areas of ML?

  2. How can we analyse deep neural networks in a theoretically principled way?
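One concrete way into these questions, in the spirit of Charles's work on heavy-tailed self-regularisation (the code is my own sketch, not his): look at the empirical spectral density of a layer's weight matrix and compare it with what random matrix theory predicts for pure noise.

```python
import numpy as np

def esd(W):
    """Empirical spectral density: eigenvalues of the correlation
    matrix X = W^T W / N, for an N x M weight matrix W."""
    N = W.shape[0]
    return np.linalg.eigvalsh(W.T @ W / N)

# An untrained (random) layer follows the Marchenko-Pastur law: its
# eigenvalues stay inside [(1 - sqrt(Q))^2, (1 + sqrt(Q))^2], Q = M/N.
rng = np.random.default_rng(0)
W = rng.normal(size=(200, 50))        # N=200, M=50, so Q=0.25
evals = esd(W)
# In trained networks, heavy tails escaping this random bulk are read
# as a signature of implicit regularisation picked up during training.
```

This kind of spectral fingerprint is what lets one reason about regularisation without ever touching the training data.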

 

References 


In this episode I explain how a community detection algorithm known as Markov clustering can be constructed by combining simple concepts like random walks, graphs, and similarity matrices. Moreover, I highlight how one can build a similarity graph and then run a community detection algorithm on such a graph to find clusters in tabular data.

You can find a simple hands-on code snippet to play with on the Amethix Blog 
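For readers who want the gist without leaving this page, here is a minimal sketch of Markov clustering in plain NumPy (the parameter values are illustrative; see the Amethix blog for the hands-on version):

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50, tol=1e-6):
    """Minimal Markov clustering on an undirected adjacency matrix."""
    M = adjacency + np.eye(len(adjacency))   # self-loops stabilise the walk
    M = M / M.sum(axis=0)                    # column-stochastic transition matrix
    for _ in range(iterations):
        prev = M.copy()
        M = np.linalg.matrix_power(M, expansion)  # expansion: longer random walks
        M = M ** inflation                        # inflation: boost strong edges...
        M = M / M.sum(axis=0)                     # ...and renormalise columns
        if np.abs(M - prev).max() < tol:
            break
    # rows that kept probability mass act as cluster "attractors";
    # their non-zero columns are the members of each cluster
    return {tuple(np.nonzero(row > 1e-8)[0]) for row in M if row.max() > 1e-8}
```

Expansion spreads probability along random walks; inflation sharpens it, so the flow ends up trapped inside natural communities of the graph.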

Enjoy the show! 

 

References

[1] S. Fortunato, “Community detection in graphs”, Physics Reports, volume 486, issues 3-5, pages 75-174, February 2010.

[2] Z. Yang, et al., “A Comparative Analysis of Community Detection Algorithms on Artificial Networks”, Scientific Reports volume 6, Article number: 30750 (2016)

[3] S. van Dongen, “A cluster algorithm for graphs”, Technical Report, CWI (Centre for Mathematics and Computer Science) Amsterdam, The Netherlands, 2000.

[4] A. J. Enright, et al., “An efficient algorithm for large-scale detection of protein families”, Nucleic Acids Research, volume 30, issue 7, pages 1575-1584, 2002.

