## Friends

Philosopher, composer (opera and classical), and writer (poetry and satire).

sarah_spirit: It was a kiss to level mountains and shake stars from the sky... It was a kiss to make angels faint and demons weep... a passionate, demanding, soul-searing kiss that nearly knocked the earth off its axis... —Lisa Kleypas

Perspicuity to FlorSilvestre: Bringing you a chariot of light whispering through the sky, eternal, ecstatic, and beyond the iron fingers of time.

Perspicuity to China99: In answer to your profile question: Because it is far better to live in a world of moral beauty.

Perspicuity: My thought for today: The fundamental mathematical concept is fungibility. One interval, say between two and three, is exactly the same as any other (say between 97 and 98). One point is the same as another. One proton is the same as another. Anything that is fungible can be treated mathematically, from the economics of fungible dollars to the motion of fungible water molecules. Treating non-fungibles as fungible extends the range of mathematics: statistics, for example, treats non-fungible persons, or physically diverse coin tosses, as fungible units, fungible heads or tails. Computer science is mathematical because it is digital rather than analogue, and each digit is fungible (one pixel is the same as another). What math always misses is the unique. Some numbers have unique properties, but all numbers can be generated by a recursive procedure, although recursion must begin somewhere, and that origin itself is beyond math, an ineffable given. (In a sense, zero, the empty set itself, defies mathematics, which is perhaps why the introduction of zero was an imaginative leap and why zero produces paradoxes that must simply be skirted by fiat. 10/0 is not allowed. Here be dragons.)

pourmoi: I'm not a math expert per se, so forgive me if I come across as one. Matter types differ, and in math we inherit a property to a variable measure, which could be BASE or DIV/1.
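Two checkable points in Perspicuity's post, that one interval is interchangeable with any other of the same length, that coin tosses are treated as fungible heads or tails, and that 10/0 is simply forbidden, can be seen directly in code. A minimal Python sketch (my illustration, not from the original post):

```python
# Intervals are fungible: the interval from 2 to 3 has the same length
# as the interval from 97 to 98.
assert (3 - 2) == (98 - 97)

# Statistics treats physically diverse coin tosses as fungible heads or
# tails: only the count matters, not which coin produced which outcome.
tosses = ["H", "T", "H", "H", "T"]
heads = sum(1 for t in tosses if t == "H")
assert heads == 3

# Zero's paradox is skirted by fiat: 10/0 is simply not allowed.
try:
    10 / 0
    result = "allowed"
except ZeroDivisionError:
    result = "here be dragons"
assert result == "here be dragons"
```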
Any time we relate different kinds of matter, the base value assumes we share similar constraints in any mathematical product that uses, for example, BASE 10 as a ground value. This is why logical shifts and precedence work. If we did not have that common denominator, and fractions could not be devised from the same numbers, we could extend our math exponentially, yet there would be no reference: no reference to any type we now consider a variant or mutated string in math, in other words a variable.

Basically, what this tells us about the inherent principles is that values are kept in some type of container, whether an array, a function that delivers a product, or any kind of variable, boolean, or constant. When working with comparisons, or rather when trying to divide one object by another, we employ shifts in logical precedence; hence one interval is one step with a divisible number. In the posts above, an interval is expressed as the value that sits in between; in a different base system, that value would shift accordingly. If we agree that fungibility is inherent in the system, where we take carbon from one equation to the next, the container (really an array or variable here) would hold a different size and measure, or an equal one. There are not many variations, yet they occur when we apply functions or methods, or assign a logical operator to mutate a number inside a container.

For instance, take the statement [^2], an inverse statement. The operator ^ is assigned one number, but the rule says it acts on all the numbers inside its logical constraint according to precedence, where it has high priority, the same as multiply or divide. What actually happens is a kind of unary shift of that number: it gets shifted [^2], the '2' goes to the left, and according to logic it shifts in steps of 2, first 1 -> 2.
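A caution on the operators named above: in most programming languages (Python and C included) `^` is bitwise XOR, not exponentiation, and in Python it actually binds more loosely than multiply or divide rather than at the same precedence. Shifts, meanwhile, really do move in doubling steps, 1 -> 2 -> 4. A small Python sketch of what these operators compute:

```python
# '^' is bitwise XOR, not a power operator; Python spells power as '**'.
assert 3 ^ 2 == 1         # 0b11 XOR 0b10 == 0b01
assert 3 ** 2 == 9        # exponentiation

# A left shift by one doubles the value: 1 -> 2, 2 -> 4.
assert 1 << 1 == 2
assert 2 << 1 == 4

# XOR of a number with itself cancels to zero.
assert 5 ^ 5 == 0

# Precedence check: '*' binds tighter than '^' in Python,
# so 2 * 3 ^ 1 parses as (2 * 3) ^ 1 == 6 ^ 1 == 7.
assert 2 * 3 ^ 1 == 7
```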
The actual value is defined, in matter of expression, by a typical XOR statement that internally shifts and combines two numbers, where the 2 is duplicated according to the unary operator '^'. The interpretation will differ when the operator is overloaded (a function tied to the operator), since any operator +, -, *, /, ^ can be assigned any value, here again referring to a BASE/dec. In any variant of degree, the numbers are divided and recombined according to a logical constraint, which is another base/1 value, dec(2). So the array actually consists of [4], by having the power of math work its magic, and of course this is Pow/2. Any fungibility we can infer here is that these numbers are integral, so they fit in an array of INT[n], where 'n' is the number of integers in the array.

With a BASE/3 value, you will see this logic is not the same for all numbers. There are only 3 logical options, n(1), n(2), n(3), so placing a value in between would be arguably difficult while still keeping a ground base of 3 integers. Of course, here we have floating point and double precision to curb this measure, which actually decreases the limitation, or quantizes any measure, for instance in a pragma. If we have the same array [^2], where the value '^' is in our case a logical conjunction, it will only test those numbers, yielding the same result; the actual value '2' remains the same. It is much like the base number and already consists of 2 integers, (1)(1), in a logical array[] with no constraint on the number of elements. To arrive at a value that fits in between, we thus have a choice: integers, floating point, double, or a (void) type reference assigned to operate on the integral number. For instance, a floating-point value between '1' and '2' would of course be 1.5, yet we see that the size of the array must expand to hold the extra portion, the surplus .5.
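The remark above that the interpretation of '^' differs when it is overloaded (a function tied to the operator) is a real feature of languages with operator overloading. A hypothetical Python sketch (the `Base10` class is my invention for illustration): defining `__xor__` on a class makes `^` mean whatever the overload says.

```python
# When a class defines __xor__, the '^' operator calls that function,
# so its meaning becomes whatever the overload chooses. Here we
# (arbitrarily) make '^' mean exponentiation instead of XOR.
class Base10:
    def __init__(self, value):
        self.value = value

    def __xor__(self, exponent):
        # Overloaded interpretation: raise to a power.
        return Base10(self.value ** exponent)

n = Base10(2)
assert (n ^ 2).value == 4   # overloaded '^' computes 2 ** 2
assert 2 ^ 2 == 0           # plain ints still XOR: 0b10 ^ 0b10 == 0
```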
An integer array would still assign the value [1] for the same floating-point number 1.5; the remainder is simply forgotten when we convert it to an integer. What I am trying to show is that it does not matter how we write down the 'marker' at 1.5 and 2.5, conversely, using floating-point numbers in a BASE/3 system: there will be only 3 assignable values. Because we take a variable and add it to a logical container, we actually operate on the array instead of the number: [1][5], where this represents the floating-point number, and where we also assign insignificant space for the decimal dot (.) in any dotted notation. We see [1][5] can go to [2][0], and it can conversely go to [1][0], which is of course (1) and (2). Yet how do we define the remainder if we only had an array[ ] of element size 1? Of course math is flexible, and we can put any number inside an array, and it would not matter. Yet it tells us about precedence.

Drawn out as a function: 2 >>> 1 [ +2 ] = 2, yet the number would rest in a container, making it really an absolute or constant number. To understand this, note that I have already drawn the accumulator function, or operator, within the bounds of the array, and represented the actual value in the array. Yet the precedence is still active: (2) is first shifted until it reaches the lowest integer (1) and is (+) added to the array[ ], and so forth; the rest of the number is added and accumulated. This is because the array is not a construct. We can find any value represented by several decimals, as you know, then arrive at the same decimal when you take a unit that is fungible. The logical constraint of the array tells us why: A) it can house any integer value, so for the decimal portion of a floating-point number, the array will have a rest, which we could draw, for instance, as [1] /5, where we have a function. Here we see that if 1 is divided, as it says, by 5, we get 0.2.
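Two claims in the paragraph above can be checked directly: converting 1.5 to an integer forgets the remainder, and 1 divided by 5 gives 0.2. A minimal Python sketch, reading the [1][5] notation as a digit array (my reading, not spelled out in the post):

```python
# Converting a float to an integer truncates: the fraction is forgotten.
assert int(1.5) == 1
assert int(2.5) == 2

# Reading [1][5] as a digit array for 1.5: the decimal dot itself
# occupies no element, only the digits do.
digits = [1, 5]
assert digits[0] + digits[1] / 10 == 1.5

# The [1] /5 example: dividing 1 by 5 yields 0.2.
assert 1 / 5 == 0.2
```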
Which is exactly my point: the size of the remainder matters, and it is not any modulus. The function [1]/5 can be applied in a different context where we take the remainder to divide the number by, hence 0.5; we see it has no bounds as an array, yet we know it yielded the same basic fraction in a BASE/10 system as we suspect of DIV/1. It would yield 2.0, and the 0 would be the remainder. Yet it is still a double number, double being the measure that contains it: an array of size [2], or, when we write the remainder, [2][ ], or in refined form [2][0]. We see there is hardly any transition between dividing by a number and dividing by zero.

To make a long story short: if we assign to our [2][1] an increment of 0.1, a variant in the degree of statements has actually happened; the floating-point number actually increases its tiny radius. So basically it is geometry, where the pie slice fits nicely into the circle in BASE/10. Hence there are variable BASEs, different units, and the type (void). I can assign any value in between [2] [n] /n, where in BASE/3 this would leave us with no remainder if we inserted 1.5, because A) the array shifts to three elements in size, called the address. This shift in math is automatic. Yet we calculate (1) and add it to the correct element, the first element, with a value of (2) and an address of &0. This address can perhaps hold any scalar number, which is why we inherit types to assign to and classify our data: basically a unit or measure.

Last bit of text: if we have BASE/10 here, [2] /0, and assign [1][5] (not (15)!), we would have to increase the array, which of course does not affect the result at all. The same transposing happens with prime numbers, or any number placed in a different context. We only take into account the result of the actions and call this a label. So we label a step between 1..2 as 1.5, and really the next number is 3, because we divide by halves of our product.
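If the [2][0] pairs above are read as quotient and remainder (my assumption, not stated in the post), the bookkeeping matches Python's `divmod`, and the "yields 2.0" step matches dividing 1 by the remainder 0.5; dividing by zero, however, is not a smooth transition but an outright error. A sketch:

```python
# divmod returns quotient and remainder together, like a [2][0] pair.
assert divmod(10, 5) == (2, 0)
assert divmod(10, 3) == (3, 1)

# Taking the remainder of [1]/5 and dividing by it: 1 / 0.5 yields 2.0,
# a float ("double") with a zero fractional part.
remainder = 1 / 5
assert remainder == 0.2
assert 1 / 0.5 == 2.0

# There is no gradual transition to dividing by zero; it is an error.
try:
    divmod(10, 0)
    ok = False
except ZeroDivisionError:
    ok = True
assert ok
```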
This is why any variable in such a measure must be a constant. Logical as well, because if numbers could fluctuate, it would most likely degrade our entire foundation of math, as well as decimate our numbers; any decreased value would not be reinstated whole. So 1 .. 1.5 .. 2 .. 2.5 .. 3 in logical steps could actually be labeled much more quickly, and is already 1 .. 3. The fractions that occur are really a label for a matter in supposition, a matter that evolves. Still, the same remainder in each step would have the same value or size, adds in the same type of arrangement, in the same dimensions, and can also partly be forgotten. This is why we can take some carbon from one lung to the next and it would not make any difference. In math we try to keep our heads at numbers.

Perspicuity: I just cut an enormous, deep red dahlia from my garden to grace my kitchen table. It reminds me of the beauty of mortality: that winter comes too soon for all of us, but we can bloom fierce and glorious in the face of it.

Perspicuity: Musing upon this unseasonable cold, I decided that the purpose of cold weather is to make sure we don't feel too at home in the world. We are part of earth, and our lives unfold in a physical world, but we are also part of a world of ideas and magic: our eyes turn quantum events into colors, our tongues turn chemical esters into tastes, and our hearts turn moments into eternities. We must not feel so at home in either world that we forget the other.

iwooltheworld: Perhaps the cold represents our ultimate suffering and demise. Being dead, devoid of heat. Our worn bodies, covered in frost, immortalized in ice.

Perspicuity to iwooltheworld: Sending you a copy of The Essays of Elia. (For those who don't know, the author is Lamb.)

Perspicuity: Thank you. It has been among my favorites since childhood (I used to visit it often in the Metropolitan Museum of Art).

Perspicuity: The five fingers of wisdom are kindness, insight, aspiration, reflection, and serenity.

LadyJustice: I heard it somewhere lol

tstarr8481: ...