Well, the first reason seems like nonsense. I admit to sometimes taking that position myself, but I'm used to working in languages that have more powerful abstraction capabilities than Ruby (and so it is easier to shape programs to match human thought). And even there, it is almost always useful to put in some comments as guides. The Rails/Agile community seems to put a lot of thought into matching up user-visible structure, program structure, data structure, and the natural language ways to describe them, but their tools are really pretty crude and don't work in exactly the places where you need guidance.
For example, I ran across a method called sanitize_structure. Obviously this was supposed to take some representation of a (molecule) structure and produce a new one that had certain features removed, but what were they? There was no easy way to tell. And that's because this operation is not self-evident, it depends on domain knowledge that is not readily captured in standard Rails apparatus.
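To make the complaint concrete, here is a purely hypothetical sketch of what a documented sanitize_structure might have looked like. The behavior shown (stripping explicit hydrogens and keeping only the parent fragment) is invented by me for illustration – the real method's behavior is exactly what I couldn't determine – but a comment like the one below would have answered my question in seconds.

```ruby
# Hypothetical sketch: the real sanitize_structure's behavior is unknown;
# this invents a plausible one just to show the value of the comment.

# Returns a copy of the molecule structure with features that confuse
# downstream tools removed: explicit hydrogen atoms are stripped, and
# only the first (parent) fragment is kept, discarding salts/solvents.
def sanitize_structure(structure)
  atoms = structure[:atoms].reject { |a| a[:element] == "H" }
  { atoms: atoms, fragments: structure[:fragments].first(1) }
end
```

The domain knowledge lives in the comment, where a newcomer can find it without interrupting anyone.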
The second reason also seems pretty nonsensical. Yes, comments may become outdated, but eliminating them entirely seems like having your legs amputated to get rid of toe fungus.
Now, this group works much more collaboratively than most places I've been involved with. And that means that perhaps code doesn't need to be as self-explanatory as I'm used to – instead, you are encouraged to go talk to people and get them to explain it to you. Talking is good, although it seems both inefficient and risky as a way to transmit software design information.
But I've really been missing the presence of English in the code. I've tried addressing this technically – I found an Emacs package that lets me insert annotations into the code that do not get saved in the file but instead live in a side-channel. This means at least I can make my own notes and read them later, although nobody else sees them. Commit messages from source control are also a valuable source of human-scale explanations and design rationales.
A lot of agile seems to involve this kind of over-reaction to very real problems of earlier models of software development. But I guess this kind of over-reaction is fundamental to how any new thing establishes itself.
[to go up a few meta levels – I realized recently that the whole point of most of my work, including the commercial, research, and philosophical aspects of it, might be summed up as exploring ways for reconciling computational technology with the way human minds actually work. And given this, I realized that the agile methodology people might have something to teach me in that regard – that is one reason I took this job. I was pretty certain I would find adapting to agile to be a struggle, and it has been, but I think it will be a productive one. ]
[and more meta – I have been thinking about starting a different blog for technical stuff like this – having a technical blog seems to be practically a job requirement these days – but so far can't be bothered. I'll use tags.]