On the Social Need for Systems Thinking

I sit in the bathroom and I stare at my face and I mechanically evaluate whether or not my eyebrows are completely identical in shape. Just for the sake of elegance, and symmetry. And I wish my skin wasn't so messed up, because I know our technology isn't good enough to completely get rid of scars, and it's a shame every time I look at my face and know that if I'd had better health care, I wouldn't have all these scars on my face. If I hadn't grown up in poverty and thus violence, I wouldn't have all these scars on my face, and my prospects for improving my future wouldn't be diminished by the scarlet letter of poverty: where people read my face as "lower class, inconsequential." I look at my face, and I see systemic problems. And I sit in the bathroom and I think about these things. And I think up all these wonderful anecdotes and arguments and explanations, so brilliant and clean and easy to follow at first.

Would you believe I have a complex model that I am always working on in my mind, that is laid out there quite clearly, and that makes predictions about the world around us and, most importantly, is revised when it's wrong? Yes, I have one of those. And I sit in the bathroom and I think about the model and I think about how to improve it based on its outcomes and its predictive power. Clearly we want a model with predictive power, at least statistically predictive. But we also want a model that, when we run time forward, we find in a desirable state. The model is both prescriptive and descriptive. It is ontological, epistemological and ethical all at once, so it can serve all our needs as an agent in the world. We want a model that can inform us on how to interact with our environment in a way that produces some state in the future which we desire. Obviously, all parts within it must be consistent (don't start thinking binary-logic consistency! Most phenomena are too complex to be treated that way): the model can't push impossible things off onto the model holder in order to make sense of the world or accomplish desired goals. It must also work by confirming or disconfirming formulas for action in the world based on the consequences they produce. The model must include revisions of the prescribed actions based on the outcomes of action-experiments.
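
To make that loop concrete, here is a minimal sketch in Python of what I mean by a model that prescribes actions and then revises itself based on the outcomes of action-experiments. Every name, number, and update rule in it is a hypothetical placeholder, not a claim about any real system:

```python
# A minimal sketch of the predict-act-observe-revise loop described above.
# Everything here (names, numbers, the update rule) is a hypothetical
# placeholder for illustration, not a real model of anything.

class WorldModel:
    def __init__(self):
        # estimated probability that each prescribed action moves the world
        # toward the desired state
        self.belief = {"externalize_costs": 0.6, "constrain_ecologically": 0.5}

    def predict(self, action):
        """Predict whether an action leads toward the desired state."""
        return self.belief[action] > 0.5

    def revise(self, action, outcome_was_good, rate=0.2):
        """Nudge the belief toward what the action-experiment actually showed."""
        target = 1.0 if outcome_was_good else 0.0
        self.belief[action] += rate * (target - self.belief[action])


model = WorldModel()
action = "externalize_costs"
predicted_good = model.predict(action)   # the prescription, before testing
observed_good = False                    # running time forward shows damage
model.revise(action, observed_good)      # the model is revised when it's wrong
print(predicted_good, round(model.belief[action], 2))
```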

Alright, so let's just be straightforward: the difficult part here is balancing the predictiveness of the model against its ability to create long-term desirable outcomes. For instance, I can solve my problems with poverty by creating a successful business selling cars (not a bad metaphor for the 1900s), getting a job at Jeep, engineering vehicles, or somehow or other contributing to the automotive industry. Fast forward 80 years with a significant percentage of agents in the model acting off of this idea from their own mental models, and what happens? The continuing auto fetish has helped to kill off, or begun to kill off, life on the planet in huge droves. Alright, so we throw out that option and we look at why we had to throw it out. Why did we have to throw it out?

Yes, it is predictive: people do indeed behave in this way, externalizing costs to produce wealth. But it is ecologically unfeasible, and so it doesn't meet our criterion of having desirable long-term outcomes. Why? Because biospheres are sensitive and changes to them have to be made scientifically. Why? Because those are the rules nature started out with. Why? Well, if you want to know that, study physics: but know there is nothing at all sacred about those rather brutal laws. Alas, they are laws. What we have so far established by asking why is already profound. What we have established is that, from a simple thought experiment, by making a model, basing it on a few simple scientific facts that we know (i.e., the US is state-capitalist, cars create greenhouse gases, people persist in using cars, and this is bad for the resilience of the biosphere), and running it forward, we have determined that whole classes of prescribed actions can be cut out of our model. In this case, what gets cut is the idea that a proper model can involve prescriptions for action (externalizing costs) that destroy our overall intentions. I have assumed we want a model with desirable outcomes here, and I will discuss that later.

Here, what destroys the criterion of desirable outcomes is obviously a lack of ecological stability. So anything that is ecologically unstable is not consistent with our requirements for a model, because by running the time parameter forward, we find life has been destroyed. We should do little tests like this for all the actions that individuals' ideologies (and less conscious urges) cause them to undertake: assume some f(x) or some such thing describes how many people subscribe to a given ideology in the model and so perform the given action, then tally up the consequences of those actions, let them play out in the model, and see how they affect our desired outcomes. And then, based on the consequences, the actions should be judged as helpful or not helpful in proportion. In this way, we can sort out at least some cases where actions are clearly inconsistent with the requirements of our model, and so we can soundly reject them (there will certainly be areas where our knowledge and empirical evidence are lacking and our conclusion may simply be 'indeterminate' for a given action, which is fine, doesn't break the method I am proposing called "systems thinking," and really only proves that systems thinking is a good method, because some systems are very chaotic and this allows for that).
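
To make the tallying step concrete, here is a toy sketch of the procedure. Every number in it (the adoption fractions, the damage scores, the resilience threshold) is invented purely for illustration; the point is the procedure, not the figures:

```python
# Toy sketch of the consequence-tallying step: some fraction of the population
# subscribes to each ideology and performs its typical action every year; run
# time forward and score the aggregate outcome against the requirement of
# ecological stability. Every number below is invented purely for illustration.

POPULATION = 1_000_000
YEARS = 80

ideologies = {
    # fraction subscribing, and assumed ecological damage per person per year
    "externalize_costs":        {"fraction": 0.30, "damage_per_year": 1.0},
    "ecologically_constrained": {"fraction": 0.70, "damage_per_year": 0.1},
}

RESILIENCE_BUDGET = 2.0e7  # arbitrary threshold beyond which resilience is lost

def run_forward():
    total_damage = 0.0
    for params in ideologies.values():
        total_damage += POPULATION * params["fraction"] * params["damage_per_year"] * YEARS
    return total_damage

damage = run_forward()
print("aggregate damage:", damage)
print("ecologically stable?", damage < RESILIENCE_BUDGET)
# If the requirement is violated, the prescriptions driving most of the damage
# are judged 'not helpful' in proportion and cut from the model.
```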

If we can reject some ideologies necessarily, given our premises for the model, then this means there is some solid footing in the world, some objective grounds that we can have the highest degree of confidence in. If you believe in reality, and you can see that life and natural selection are persistent, then it is necessarily true that we can know some things objectively. I contend we can even know some things about ethics with approximate certainty. Is it so absurd to say that our environment gives us a clear foundation upon which to start our reasoning, and that its being in the state that it is in necessarily brings forth some rules of behavior in the model? Is it absurd to say that in a pinball machine, one hits the levers, or else one loses the game? So is it so absurd to say that in the world, either one generally aids in destroying life, or one follows certain constraints, such as: the ecosystem remains resilient? Is this so absurd? Are we just falling into a trap of deluding ourselves that there is such a thing as reality and that we can know it with some high degree of objectivity, at least sometimes? Objections seem humorous here, and I advise anyone with them to try winning a pinball game without hitting any of the levers. Or maybe, try going into a bank and asking nicely for their money. Who knows– if there's no such thing as an objective reality, then maybe it will work!

All empirical indications show that there is a material reality. If you choose to reject the existence of material reality, there is only one other class of ideas to choose from, so the choices are about as well defined as anything we can know. You can go off saying that some deity or simulation-maker orders us to discount these empirical observations, but how will you affect reality when your ideology does not allow you to admit that testing reality yields new knowledge, insisting instead that only god does? Who would have been able to predict that a simple nudge in the direction opposite to recognizing reality would cause people to feed their children bleach to "cure" health problems (because they have an ideology that operates on the "some authority told me to do it" principle, rather than on a principle of recognizing material reality and rationality, by which they could test the world and find out the truth themselves)? Is this deity-driven, supernatural, anti-naturalist model predictive? Does the divine goddess that told you about the parasites in your kids' gut also know that the bleach will kill them in two months when their intestines fail? Faith does not generally cause something to work. How will you know what will happen tomorrow with such a model? An honest understanding of the things involved makes a plan work, but this other kind of ideology, when followed by people, creates a social model whose most basic cultural assumptions are in denial of reality. The basic argument here is "do not consider reality first, consider god first." But what about the materials of life? It's no wonder religious martyrdom is so popular. It's an exercise in denying the materials of life. What's worth more in our model, if we run time forward: people destroying things as they argue over ideologies (no progress can be made without rationality, because there is no way to come to terms between conflicting views when either one god or another must be correct), or people building technologies to determine what is actually correct (an example of the outcomes of a majority of people following the 'rational' menu of choices in the model and supposing reality exists even if we don't know it perfectly)?

So if you want to take the deity option, go ahead. But there's no stability in that model, no predictive power, and no way to ensure a good future. Everything there is dictated by a deity, and there is no hope of changing that without subjecting your deity to the determinations of empiricism and logic. And at that point one ceases to be religious. Compare this with the nihilist option I namelessly alluded to earlier, in my first assumption: rational people agree that a good model of the world, and of how humans should act in it, will require that life not be extinguished in the long run (that it reach some ideal state/set of states/trend of states). Someone who rejects the idea that some long-term trend or state is preferable to any other is probably a nihilist, and they fall in with the religious people here. If you subscribe to either of those, you're subscribing to an ideology that advocates a model that can't be controlled in any way. That's because only rationality, guided by our values or intentions, helps us get better control of the events around us. Humans, and all other life, exist on an unstable cusp, and it's a cusp against entropy. We are complex and organized systems because we use tactics to manipulate entropy to our own ends– which are determined by values– and this allows us to be organized against the will of entropy. Without science there is no method to effect change, and without values there is no reason to effect change. Embracing religion or nihilism is embracing a future of helplessness.

I know that's a long string of inferences, but if we accept that life should be around in the long term, then we necessarily accept rationality. There's no other choice; it's logically implied. The only other choice is pure denial, or nihilism, or religion, and those are destined to bring about uncontrollable outcomes. What would be the outcome when all people follow religious deities without hope of reason taming the deities? What would be the outcome of even the most optimistic (impossible?) version of the religious model, where all people accept the same religion? Certainly the homogeneity of the system would bring about collapse, if we trust any knowledge we've gained in the ecological sciences. These facts are directly and deductively traceable to the foundations of the basic laws of nature in this way. What are the other options? An option for a world of nihilists? Here, the problem seems to be the opposite of the homogeneity problem with the religious option. It does seem to me that at least some people might find it desirable to detonate nuclear weapons or release biological agents for pure enjoyment, or to gain power, or in the name of some other nihilistic whim. This does seem quite likely, and I would bet that a good enough, computationally intensive first-principles model of the earth and the life on it would show that a world made up largely of nihilists would end after some finite, and in my opinion shortish, period, when someone destroys the planet. And we probably won't have gotten off the planet, because hard work is not usually pleasant, and since this particular conjectured world hasn't much regard for ethics or rationality, no one in it would be putting forward enough resources for a decent space program. Then again, we cannot feel confident until we get an interdisciplinary team of scientists programming models– but you could start with NetLogo.
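
Short of that interdisciplinary team, even a back-of-the-envelope hazard calculation shows how 'finite and shortish' such a world's lifetime would be. The annual probability below is a pure assumption, chosen only to illustrate the shape of the decay:

```python
# Back-of-the-envelope version of the "world of nihilists" claim: if in any
# given year there is some probability p that a single actor destroys the
# planet on a whim, survival decays geometrically. The value of p is a pure
# assumption for illustration; only the shape of the conclusion matters.

p = 0.01  # assumed annual probability of a civilization-ending act

print("expected lifetime:", 1 / p, "years")
for t in (50, 100, 200):
    survival = (1 - p) ** t
    print(f"chance of surviving {t} years: {survival:.2f}")
```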

So, resisting these ideologies that produce ecologically unstable behavior, we can see that consumerism is out the window. Wow! We can know things about the world! We don't have to be plagued by constant arguments about this perspective or that perspective and how "everyone's opinion is equally valid and deserves respect!" We only sometimes have to deal with those arguments, and only when we haven't gotten the requisite knowledge yet and thus cannot make a determination on some candidate idea with a high enough degree of certainty. What an improvement! There are a slew of other ideologies we can just check right off the list. That's because these ideologies would be prone to spread, and if a significant number of people adopted them, they'd result in behavior that the requirements of our model just plain rule out. And this isn't even that radical: we have laws against dumping arsenic into rivers because a large number of people seem to understand this would eventually lead to very bad things for us. It isn't that much different to realize that ways of thinking which cause ecological catastrophe should be rejected too. I've used the environment a lot because it is our most obvious constraint, and I think a lot of people can see it quite easily, so it's an effective example. I'd argue there are many other such constraints. It is further interesting to note that the constraints of our environment give a universality to at least some subset of ethical actions.

Another constraint on our ethics would be the necessity of violence early in the development of life in the universe. This puts an ethical constraint on all of us. Because of the laws of nature, violence is necessary for life to develop (think of the violence necessary for natural selection). And given that life is persistent in developing, which it seems to be, this implies that if one were to resist the efforts of life's progress (and so resist its ability to reduce the unnecessary violence of the laws of nature through logic and technology), then one is necessarily acting in a way that will increase the aggregate violence of the universe, which seems bad. I'm not aware of any good arguments for severe unnecessary violence other than the nihilist/religious ones I already addressed. So, since causing the destruction of life would mean that life has to redevelop that level of complexity again, at the added cost of more violence needed to develop, 'opting out' of making ethical decisions places that cost of violence partially upon the 'opt-out' person's head. But the violence argument isn't the center of this rant, and neither is the environment. The point is that we can and must make models that meet our personal requirements (and there is and should be room for a lot of diversity here!), and that if people generally did so, their behavior would be different than it is presently (most people's models seem to end where they personally aren't involved). So the fact that at least a substantial number of people are not right here with me on this train of thought (and I talk about this with people) means that most people are not inclined toward a modeling, systems-thinking perspective, or they're nihilists, or they're religious.

I didn't know I had a modeling perspective until I took a class on agent-based modeling, but I'd had it all my life. It was nearly inborn with me. I was always making little models to explain the world around me. They were fanciful sometimes, of course– I didn't have a lot of evidence collected yet– but it was always something I was doing. I still am always doing it. So when I look in the mirror, I see the model. Because every single part of my life is part of the model, and it is all so obviously connected (even if not obviously how) that it's nearly impossible to do any one thing without being reminded of all these other things. And that is quite exhausting, constantly looking at every person's actions and seeing their tiny little dx contribution to huge problems. And it seems to me it wouldn't be so exhausting if other people were taking on a modeling perspective, because they would at least be thinking about the aggregate long-term effects of their actions. In that case, people would probably be acting more responsibly and my life wouldn't be so stressful; I wouldn't have so much of the burden placed on me. In that case, let me tell you about some things that would not be happening. If people were taking on a modeling perspective, or a systems-thinking perspective, then they would not be brushing other people's problems off so easily. I am too often helping people whose problems are obviously consequences of a system, and when people engage in that system, they are partially responsible for the people the system negatively affects. So if people were taking this kind of perspective, then they would realize that they are partially responsible for other people's lot in life, sometimes in a significant way.

When you profit off of other people, you are responsible for them being poor if they are poor because of it. That's a simple definition, and so it is true. So if you are a sexist douchebag like Donald Trump, maybe you're the one that ought to be giving material resources to help out my friends that are prostituting themselves for drugs because they were treated like shit by their sexist fathers in ways that resulted in disastrous traumas that these people are now suffering with every minute of every day. Maybe if those people had a brain that seemed to work like mine in any way at all, not to sound egotistical, they would realize that they should feel responsible for providing some compensation to the people they have deprived of material resources (even indirectly, for example by helping to further sexist ideas). If you opt in to a system of taking advantage of women, and you become a public figure, then you are necessarily influencing some people into thinking that is okay, because we can show that humans follow each other and build societal norms. So you're Donald Trump and you are building those norms, and then some guy is accepting them (and granted, the guy is responsible too; another problem with people is that they cannot seem to understand that more than one cause may be necessary for one effect, and vice versa), and then that guy goes home and kicks out his daughter who is 13 and physically abuses her because she isn't carrying her weight at home or whatever, social Darwinism and all that shit. And then she goes out and gets abused on the street at 13 and has no resources to try to overcome the physical effects in her brain that the trauma has caused. And so Donald Trump should be paying for those resources, not me. I've lived in poverty my whole life and devote all my spare time to helping already. How can I single-handedly fix these problems other people are making? I can't.

Now, it's important not to commit a slippery-slope fallacy here. Just because I cannot save them all does not mean I should not try to save the ones I can save. That slippery-slope rhetoric is no better than that of someone who says "well, I can't be perfect, so I might as well steal when no one is looking," or "I can't be perfect at doing good, but I can make a lot of money, so that's what I'm going to do." Such rhetoric is no different from that of the guy who makes $20 an hour but works at a grocery store, walking past a bum on the street, thinking "I've worked hard and I'm lower class too, what am I supposed to do? I can't give anything." Well, that guy is part of the problem too. If we all gave a couple pennies, that houseless person would have a lot of money and we wouldn't notice the difference (we already do give enough, actually: the government just squanders it on military funding and the like, but that's another rant). But the bottom line is, unless that guy either gives his two pennies or actively tries to do something else instead, he's part of the problem, in essence. His actions may seem understandable, but we cannot just go off of what seems understandable: return to the requirements of our models.
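
The 'couple pennies' arithmetic is easy to make concrete. The number of passers-by below is hypothetical; the point is only that unnoticeable individual contributions aggregate:

```python
# The "if we all gave a couple pennies" arithmetic, with hypothetical numbers.
passers_by = 100_000          # assumed number of people walking past
pennies_each = 2
dollars = passers_by * pennies_each / 100
print(f"${dollars:,.2f}")     # -> $2,000.00
# Each contribution is an unnoticeable dx; the aggregate is not.
```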

What if all people do what that guy did, and also, like that guy, they don't try to change the system in some other way instead of giving some direct help? Well, then clearly, we have a system which will have homelessness, violence, abuse, etcetera, until it collapses. We can argue about the value of homelessness, if you want to try to sneak your counter in there, but I'm going to go ahead and assume no one is going to disagree that homelessness sucks, because when people are homeless, most of us feel bad, they feel bad, and even if houseless people were so inclined to do something we would like, they can't, because they're too busy trying to survive! And what's more, leaving them on the street incentivizes them to do things that people normally don't like! So it seems the conclusion is obvious: homelessness generally is not going to produce desirable results in the long term, and that working-class guy should give the homeless man his two pennies. Well, you say, what if you live in New York City and so you encounter 1000 bums… fill in the blanks… now how would I survive in a system where I lose 10 hours and 20 dollars each week handing out pennies? Alright, if that's what you're thinking here, then you didn't get my point about the slippery-slope fallacy.

Well, have you ever heard the story of the starfish on the beach? It's hokey and I don't know where I heard it, but it's really quite good. In essence, a kid is on the beach where starfish have washed ashore by the thousands and are dying, and the kid is throwing them back into the ocean. An adult says, "you can't save them all," and the kid says, "yeah, but I can save this one" and chucks it out to sea. Well, that's a good story for life, because there are very few absolutes in life and we shouldn't let false dichotomies talk us into abandoning a better world. Just because you can't give money to all the homeless people doesn't mean you shouldn't do it as much as you can or think will be effective. And that's what this rant is really about: moving toward a model where people act organically based on rational principles, and some improved state arises out of it. This isn't just about homeless people: systems thinking is about seeing the tiny dx contributions of every single person's actions, and changing those tiny dx's to point toward creating a world we all want to live in. Donald Trump, and all other people of this ilk, obviously owes society a lot and needs to pay up, as I said above. But the problem is also far, far deeper than that. The problem is that common people need to adopt a modeling perspective, a systems perspective, and they need to begin to see how their actions affect the world at large, in a way as simple as altering a parameter and running the system forward. I can only imagine that, because of this, brain-to-brain interfaces will one day be responsible for instigating the largest bifurcation ever witnessed in evolution.

Keep in mind humans are cultural, and so our individual actions have the propensity to grow into complex systems-within-systems that spiral out of our immediate control. If people saw their actions this way, people in my immediate locale would act differently, and it would have the effect of immediate relief upon me. For instance, if my friend's father would simply accept his guilt for not believing his daughter when his Nazi father molested her for most of her childhood, then maybe he would provide her with reasonable resources to get better, realizing his part in creating the problem with her mental state. And then I wouldn't have to be a person with far less than he, stretched thin between all the people in my immediate locale that need my attention, trying to provide help that he is more responsible for than I am. Since we cannot help everyone, we must draw the line somewhere in order to survive. I think prioritizing people that organically come to be in your spheres of influence is a good rule, obviously weighing that against personal upkeep and the need to engage in long-term projects that aren't aimed at helping individuals. And so that means, in order to be ethical based on this modeling perspective, I have to help as much as I can in a way that achieves the desired state/s. So for me that means I have to still be able to do physics and help my friends and eat and sleep and stave off my own depression and all that. But helping all the people in my immediate life and doing physics is too high a demand. This I know empirically, because a nervous breakdown is hard to dismiss as subjective, seeing as there are measurable increases in the stress hormones driving it. In the beginning, I said our model cannot push impossible expectations off onto the model-builder.

I personally dislike this area of the problem. I worry about stepping into slippery-slope territory. How similar am I to the guy that says, "I can't help them all, so fuck it," when I say I can't do physics and help all the people I know who deserve help based on their systemic lack of opportunity? Is it true I can't? Or am I just not trying hard enough? I want to say, "now here it is, ethics, laid out all nice and neat. We can't help them all, so we choose based on these criteria here…." and as I go on and on, creating more criteria for choosing whom to help and finding more complications, I get the feeling that my laws are losing elegance. My description of the model is losing elegance, and we don't like that in math and physics, and those viewpoints are intimately connected with the modeling requirements I set out at the start, because I said the model must be predictive. That's a scientific model. So it does not bode well when I have to keep applying little patches to my model. First I say we should all do actions that meet the requirements of our model. Then I say, in order to conform to obvious empirical evidence, that sometimes we can't possibly do all the actions needed to be ethical, because there are too many. So I patched that up by saying we needed some criterion like acting in the areas we are actually organically connected to. Well, then I observed something else: that I couldn't even act in line with my ethics approximately all the time in my organic spheres of influence, because the needs here were so great (and the needs are so great for a reason).

So, now I am at a place again where my model seems to need a patch. And now I am exponentially losing faith in the model (on the days that I don't think the elegance bias is the most hilarious blunder of all physics and mathematics). And so I investigate, because making another patch seems bad to my scientific intuition. Why do I need to make a patch? Well, it may be because the needs are so great and the help to be given so scarce. Why is it scarce? It's scarce because few people look at the world as a system and see their part in it, extrapolating about the basic roles of different elements; and few people come to relatively obvious conclusions based on simple testing of this system. But if they did, they'd realize that when people discount their ability to affect a system by behaving differently– because they don't think they can make much difference, or they balance this against personal needs or whatever– and a lot of people make that decision in aggregate, then a lot of change that could be happening is not happening. Stuff happens in increments in the world and you can't discount the dx's. It's a complex system. That's why trends and fashion and things like that have been so unexplainable and unpredictable for so long: one person can take a single action and the whole system shifts (a bifurcation) as, sometimes, people begin to reproduce that action until it becomes a trend. So just making a few actions in the right direction can have a huge impact. So, why does my model need a patch at all? Here's the answer: I can't help all the people I am organically connected to because the need is so great, and the need is so great because few of the people with resources in my social group are bothering to engage in the systems thinking that would lead them to doing their small dx part to help. If everyone followed a systems approach, we could make the world a more just place by doing the small things we could and being conscientious on a daily basis, while remaining completely in charge of our own lives.
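
Here is a toy sketch of that sensitivity, in the spirit of Granovetter-style threshold models (a standard textbook illustration, not my own model of fashion): each person joins once enough others have, and changing a single person's threshold flips the aggregate outcome from a full cascade to almost nothing.

```python
# Toy Granovetter-style threshold cascade: person i joins once at least
# thresholds[i] others have already joined. With thresholds 0, 1, 2, ..., n-1
# the whole crowd eventually tips; raise a single person's threshold from 1 to
# 2 and the cascade dies after the first actor. One tiny dx flips the outcome.

def cascade_size(thresholds):
    joined = 0
    while True:
        new_joined = sum(1 for t in thresholds if t <= joined)
        if new_joined == joined:
            return joined
        joined = new_joined

n = 100
uniform = list(range(n))                # thresholds 0, 1, 2, ..., 99
perturbed = [0, 2] + list(range(2, n))  # one person now needs 2 instead of 1

print(cascade_size(uniform))    # 100: everyone ends up joining
print(cascade_size(perturbed))  # 1: the cascade stops after the first actor
```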

I'm not even trying to cast this in a good light. It is what it is, whether we like it or not, and it means that doing 'good' will have some effects and doing 'bad' will have some effects; the bottom line is, some small actions have huge effects. So trying to convince oneself that one's actions don't have effects, and thus that we aren't at all responsible for other people's problems in the world– that is the root of the problem. It's a lack of systems thinking. And it's also a lack of understanding of how cause and effect work in complex systems. A person must be able to understand that a phenomenon can have many causes that go into creating and sustaining it, and that even though you or I are not very responsible for someone else's homelessness, the system we opt not to resist hard enough is responsible. So because we have given in, paid taxes or gotten food stamps to survive, we have at least momentarily been part of the complex system that is the government, and so we have been its elements and we are responsible for that, even if our responsibility may not be as great as that of, say, the military lobbyists who help squander away all the social money.

Yes, it's true that it sucks that a person can be born into a system where they are obligated to act, but that is the case. We are not in a situation where we can sit out and not be unethical. It's the whole classic utilitarian argument, the "trolley problem," about people on the train tracks and whether or not you pull the lever to save the greater number– and always, always someone says they just want to opt out of the whole thing and do nothing. Well, that just isn't a good option, because now more people are dead, and it is, in some sense, partly a consequence of your actions, because you were able to do something and didn't. It's also the fault of the person who got this messed-up situation started, and so on. And we can say the same thing about us now, extending the analogy. The one at fault for our earthly situation is nature. It's nature's fault. And we all have to pay for it, and there is nothing we can do, and that's just a fact. Nature set up these laws that have made the development of life so harsh, and insisted that life develop so persistently. So the reason why you don't just get to opt out is because you are alive. As long as you are alive, you have to act and make decisions in order to be ethical, and you can't just step back and not participate and get some ethical conscientious-objector decree. It just doesn't work like that, because in the model where you do act, fewer people die and suffer, and that just can't be argued with. So since we don't get to opt out, we either give the homeless guy what we can and contribute to altering parameters in the model that will create an outcome of less homelessness, or else we don't help them out and we get a model with outcomes of more homelessness as a consequence. That is one attraction of anarcho-transhumanism– the focus on engineering our way out of ossified utilitarian debates like the trolley problem.

People don't think they have to act to push the system toward a more desirable state, and that means that only very few people are using their daily actions to affect the system on all fronts. This presents something like a collective action problem. It creates a horrible burden that can't be met when so few people participate in designing change. It is very difficult to pick and choose who will be helped and who will not– when people's lives, health and safety are on the chopping block– simply because I am only one person, and I don't see any way that anyone alone can fix the whole problem spawned by nature. So if it is true that the problem of collective action and people's bounded rationality will damn any model that depends on a lot of people working together to achieve change, then it follows that by disseminating this systems-thinking method (as well as by using technology to attack bounded rationality), we could build a better world together.
