Many problems are interesting because they admit a nifty or surprising solution, a ‘trick’. Seeing and understanding the solution is satisfying in itself. Such problems are pedagogically frustrating for less mathematically mature students, who cannot spot these tricks as quickly.

An example might be a classification of the representations of a Lie algebra whose proof is particularly nice, involving a clever argument and no brute-force calculations.

Some problems are not particularly difficult or consequential, but feel very solvable and compel us as mathematicians to want to solve them ourselves, for fun or to prove we can do it.

For example, recently I was wondering when the following equation has (positive) integer valued solutions of the form

Where .

This doesn’t seem difficult, I’ll just need to sit down and think about it for a bit. I’m not so curious that I feel the need to look up the answer, but I am curious about the process of solving the problem.

Another example that requires more work is the problem of calculating derived quantities in the video game Kerbal Space Program, such as the amount of ‘change in velocity’ needed to escape the atmosphere, or the optimal ascent trajectory for a rocket. This is one of the more inspiring applications of differential equations, to my mind.

Some mathematics is interesting because it comes to a surprising conclusion. Examples might be the Banach–Tarski ‘paradox’, or the limit of ratios of consecutive terms of the look-and-say sequence. Or, even more simply, that there is a cross product on $\mathbb{R}^3$ and $\mathbb{R}^7$, but not on $\mathbb{R}^n$ for any other $n$.

It is inherently surprising that mathematics has nice results like this at all. It indicates some profundity in the discipline that motivates its study in a universal way. Having said that, please don’t talk to me about universal properties.

]]>

We can’t draw a tesseract accurately because it requires further projection onto a plane. Of course we can still determine what is and isn’t a tesseract after this projection. In this context, ‘projection’ is the standard ‘photographic projection’ shown. It is worth noting that I refer exclusively to perspectives (or choices of photographic projection) under which the far side is not eclipsed by nearer sides.

In particular, if we have a nice projection (luckily, almost all photographic projections are nice), we can distinguish a tesseract from a cube, a cube from a square, a square from a line, and a line from a point by looking at their two-dimensional projections. It might seem odd that I mention lines and points, but these are exactly ‘cubes’ in one and zero dimensions. We might even call them a 1-cube and a 0-cube.

While playing around with these shapes ($n$-cubes) on paper, I came across an algorithm for generating projections of the cubes up to the 4-cube, or tesseract. I haven’t verified this algorithm for higher-dimensional projections onto a plane, as the drawings quickly become cumbersome, but I do think the algorithm is illustrative of the relationships between these shapes. I also do not claim to be the first to come up with this; I am almost certainly not.

The algorithm involves drawing two copies of the $(n-1)$-cube, for convenience drawn parallel to one another, and then drawing lines between corresponding vertices (single points that terminate and connect line segments) to obtain the $n$-cube. Which vertices correspond should be obvious.

The process works fine if you decide randomly which vertices correspond, but it is difficult to illustrate and interpret the resulting pictures. Note that this description of the algorithm is not at all rigorous.

For example, a line is two points (connected by a line). A square is two lines with their corresponding endpoints joined. A cube is two squares with a line joining each pair of corresponding vertices. A tesseract is drawn as two cubes with a line joining each pair of corresponding vertices. When it is said that ‘a tesseract is to a cube as a cube is to a square’, this is exactly what can be inferred from that statement.
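For anyone who wants to push past what is comfortable to draw, the doubling construction is easy to mechanise. Here is a sketch in Python (the name `cube_graph` and the 0/1-coordinate representation are my own choices, not anything from the post): each vertex is a tuple of 0s and 1s, and the $n$-cube is two copies of the $(n-1)$-cube plus the joining edges.

```python
def cube_graph(n):
    """Build the n-cube by the doubling construction: take two copies of
    the (n-1)-cube and join corresponding vertices with an edge."""
    if n == 0:
        return [()], []  # a single point, no edges
    verts, edges = cube_graph(n - 1)
    new_verts = [v + (0,) for v in verts] + [v + (1,) for v in verts]
    new_edges = (
        [(u + (0,), v + (0,)) for u, v in edges]    # first copy
        + [(u + (1,), v + (1,)) for u, v in edges]  # second copy
        + [(v + (0,), v + (1,)) for v in verts]     # joining lines
    )
    return new_verts, new_edges

for n in range(5):
    verts, edges = cube_graph(n)
    print(n, len(verts), len(edges))
```

The vertex counts double at each step, and the edge counts follow the recursion "twice the old edges plus one edge per old vertex", which is exactly the drawing procedure.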

A pattern to observe, among others, is that a line has 2 endpoint vertices, a square has 4 line sides, a cube has 6 square faces, and a tesseract has 8 cubic cells. A 5-cube presumably has 10 tesseract (regular 4-polytope) cells.

The surprising thing about this is the linear growth of this particular quantity. Although quantities such as the “number of lines” or “the number of faces” do have growth of order , which would be expected.
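The pattern can be checked against the standard face-count formula for hypercubes: an $n$-cube has $\binom{n}{k}\,2^{n-k}$ faces of dimension $k$ (choose the $k$ coordinates that vary freely, then fix each remaining coordinate at one of its two ends). A quick sketch:

```python
from math import comb

def face_count(n, k):
    """Number of k-dimensional faces of an n-cube: choose which k
    coordinates vary freely, and fix each of the other n-k at 0 or 1."""
    return comb(n, k) * 2 ** (n - k)

# the (n-1)-dimensional cells grow linearly: 2n of them
print([face_count(n, n - 1) for n in range(1, 6)])  # [2, 4, 6, 8, 10]

# lower-dimensional face counts grow much faster, e.g. edges of a 4-cube:
print(face_count(4, 1))  # 32
```

This confirms both observations: the cell count is the linear sequence 2, 4, 6, 8, 10, while edge and face counts blow up.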

This procedure is hopefully illustrative of the nature of higher dimensions, particularly for students, provided it is clear that the picture is a projection onto a two-dimensional surface and not actually a higher-dimensional object.

I intend to update this post upon finding sources that reference this procedure or generalisation to higher dimensions, and with actual drawings up to at least 4 dimensions.

]]>

Despite its scary name, the idea is very simple. You compute a lot of random points, say . Then you check which ones satisfy a certain property, say , and compute the ratio

It is so unsophisticated that it would be embarrassing to explain to someone who was not already familiar with it. The fact that it works (sometimes) is remarkable and a testament to the fact that applications of mathematics and statistics are not always complex.
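As a concrete sketch of the procedure just described (the example, estimating $\pi$ by sampling the unit square, is my own choice, as is the function name):

```python
import random

def monte_carlo_pi(n=100_000, seed=0):
    """Estimate pi: sample n points uniformly in the unit square and count
    the fraction landing inside the quarter unit circle (area pi/4)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1)
    return 4 * hits / n

print(monte_carlo_pi())  # close to 3.14159
```

Generate points, test a property, take a ratio: that really is all there is to it.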

Despite having long been dignified with a name, it does not carry the same gravitas or weight in its name as other methods like the ‘finite difference method’ or ‘Euler’s method’. It was, after all, named after a casino.

To my knowledge this is a unique naming convention, reserved for the most ad hoc of methods.

I’ll update this post if I come across the source with this phrase.

]]>

Here I am just going to enumerate the problems with these basic notations, explain why similar notations do not work either, and then conclude with modern notations that are well suited to these operations.

Firstly, the notation $a + b$ is actually fine. The important property of addition, which is that addition is commutative, is respected by this notation.

In the abstract, we say a binary operation (one taking two arguments) is commutative on a set $S$ if for every $a, b \in S$ we have $a * b = b * a$. This is important because for some objects, certain subsets commute under an operation while the whole set does not.

In the case of the real and complex numbers, every element commutes with every other under addition and multiplication. We can, however, write notations for commutative operations that fail to commute when we naively ‘swap’ their arguments.

Consider subtraction as written.

is the same as

But if we just swap the arguments, we get the following

is not the same as

The first notation also obscures the fact that subtraction doesn’t exist per se, and is just a domain extension of addition, considered as a binary operation.
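A trivial demonstration of the point (my own illustration): subtraction written as a binary operation is not symmetric in its arguments, but rewritten as addition of the additive inverse, commutativity is restored.

```python
a, b = 7, 3

# written as subtraction, the operation is not symmetric in its arguments
assert a - b != b - a

# rewritten as addition of the additive inverse, commutativity returns
assert a + (-b) == (-b) + a

print(a - b, b - a)  # 4 -4
```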

Division is more interesting and more troublesome. We have the following standard notations for divided by .

These two have the same problem as subtraction notation, but it’s harder to write in such a way that the arguments do commute naively.

and

If I were to choose between these two notations, I’d choose the second. It is much clearer how to distribute operations over it. It is, for example, unintuitive that

but

This is what gives rise to the arbitrary-seeming, didactically imposed order of operations. With good notation, the rules simply follow from the definitions of the various operations, without need for rote memorisation or arguments about precedence. For reference, I could not tell you what the order of operations is, but I don’t make mistakes because of this lack of knowledge.

The second notation makes this clearer, as the following is much more intuitive. But again, it does not commute if we swap the arguments.

The best notation for division, at least for understanding what is going on mathematically, especially in a pre-algebra or pre-calculus context, is the following for divided by .

Normally I’d just write . The symbols here are just illustrative.

It would seem that “naively” this new notation doesn’t commute either. However, $b^{-1}$ should be thought of as a number, rather than as the outcome of an operation on $b$, although strictly speaking these are the same. The operation here should be thought of as multiplication.

$b^{-1}$ is just the number $\frac{1}{b}$, so naively ‘swapping’ the arguments here gives $a \times b^{-1} = b^{-1} \times a$, as written above.

This illustrates that division by a number bigger than $1$ is just multiplication by a number smaller than $1$.
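The same point, demonstrated with exact rationals (my own illustration):

```python
from fractions import Fraction

a, b = Fraction(3), Fraction(4)

# division written as an operation on the ordered pair (a, b) does not commute
assert a / b != b / a

# written as multiplication by the inverse b**-1, the operation is
# ordinary multiplication, which does commute
assert a * b**-1 == b**-1 * a == Fraction(3, 4)

# dividing by a number bigger than 1 is multiplying by one smaller than 1
assert b**-1 < 1 < b
```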

This notation isn’t always practical, but I’d argue teaching it to students learning algebra initially will cut down drastically on the mistakes they make, and bypass having to rote learn order of operations altogether.

]]>

I have, for today only, been wondering how to develop intuition regarding regular and normal Hausdorff spaces. Ignoring the more complex separation axioms, let’s just say that in a regular space, each disjoint pair of a point and a closed set can be separated by open sets. A normal space is similar, but for disjoint pairs of closed sets.

It was initially unclear to me why it would be the case that regular and normal spaces are in fact separate things. It seems that it would be difficult to construct a topology such that one could separate points and closed sets, but not closed sets and closed sets.

It occurred to me that, in some sense, a normal space is more infinitely divisible than a Hausdorff or regular space, in that I can fit the boundary of an open set into tighter spaces.

In the end I couldn’t construct a regular space that was not normal in a short enough time to justify thinking about a non-assessable problem, so I looked it up. I’ll provide some examples at the end of this rant.

An interesting idea came up while discussing the problem with some classmates. They could not come up with examples either; however, one suggested taking a set and declaring each closed set to also be an open set.

This is not a solution, but it has the interesting property that since a (non-finite) regular space must be $T_1$, every singleton is a closed set. Therefore such a condition on a space immediately yields the discrete topology, as this is the topology whose base is the set of singletons.

This is odd, because going from a relatively unstructured topology that is merely $T_1$ to the finest possible topology with such an innocuous condition is a bit counter-intuitive.

To segue into examples, my favourite example of a space is the particular point topology: the open sets are all sets containing a specific distinguished point $p$, together with the empty set. This is not $T_1$, because $\{p\}$ is not closed: its complement does not contain $p$, and so is not open.

It is however $T_0$, as each pair of distinct points admits an open set about one of them not containing the other.
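This example can be checked mechanically. Here is a small sketch (variable names are mine) that builds the topology of all sets containing a distinguished point on a three-element set, then verifies it is $T_0$ but not $T_1$:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
p = 0  # the distinguished point

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# particular point topology: the empty set plus every set containing p
opens = [U for U in powerset(X) if not U or p in U]
closed = [X - U for U in opens]

# T0: for each pair of distinct points, some open set contains exactly one
t0 = all(any((x in U) != (y in U) for U in opens)
         for x in X for y in X if x != y)

# T1: every singleton is closed -- fails here, since {p} is not closed
t1 = all(frozenset({x}) in closed for x in X)

print(t0, t1)  # True False
```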

Before giving some examples of regular but not normal spaces: the aim of these examples is to share the intuition I’ve been attempting to gain about these definitions. First, here are some theorems, presented without proof.

Munkres’s Topology book, p 200:

**Theorem 32.1** Every regular space with a countable basis is normal.

An immediate corollary is that for finite sets, every regular topology is also a normal topology, and so the two are equivalent because, in general, a normal topology is also a regular topology.

So any desired example must be non-finite, and must also not admit a countable base.

Details of specific examples are non-trivial; in this sense such topologies are somewhat artificial.

Due to time constraints, these will be added at a later date.

]]>

There were a lot of new experiences. It was my first real encounter with Sydney Liberals, and it was definitely an enlightening experience. I saw great passion from the Liberal gay community for their only lower house representative, Bruce Notley-Smith, which impressed me a lot.

The parade itself was a bit confronting, with the highlight in controversy being the preponderance of gimp-suit furries: that is, buff grown men in leather gimp suits with tails and dog ears.

The other parties had pretty disappointing floats. Both Labor and the Greens tied their floats in to their ‘Newtown’ campaigns, which the Greens ultimately won. I thought this was a cop-out, with the Liberal float being much more pluralistic. We even had the President of the upper house, rather than just target candidates.

One incident stuck out to me the most. A little boy, probably about four years old, was there with his parents, wearing a Greens T-shirt for the parade. The Liberal float had brought bubble blowers, which caught the boy’s attention. There was this fantastic scene where grown adults, many of whom had important roles in government, were teaching this little boy in a Greens T-shirt how to blow bubbles. It was very cute.

It captured an almost magical sense of humility and humanity, and I wish I had taken a picture to share. Eventually the boy’s mother came and took him away. She was nice about it, though also obviously uncomfortable.

Then some friends saw me on TV in the parade, which was strangely gratifying.

]]>

**Let be respectively matrices. Show that is a linear transformation **

Recall that a linear transformation is a map $T$ such that $T(u + v) = T(u) + T(v)$ and $T(cv) = cT(v)$, where $u, v$ are vectors and $c$ is a scalar constant.

First note that the map is defined only when the dimensions are compatible, that is, when the argument has $m$ rows and $n$ columns.

Then observe that by linearity of matrix multiplication.

Let then by distributivity of matrix multiplication over addition.

So the map is linear.

A comment: this is really a very trivial problem and just follows from the composition of matrices being a linear map; however, the question is an intuition-testing question and so requires a more detailed answer.

**Let be elements of a vector space . Prove that defined by**

**is a linear transformation.**

From the definitions:

and

Therefore the map is linear.

**Let be an matrix. Use the dimension formula to prove that the space of solutions of the linear system has dimension at least .**

The dimension formula tells us

.

Here the kernel of the matrix is exactly the set of solutions of the system, by definition of the kernel.

An $m \times n$ matrix is a linear map from an $n$-dimensional space to an $m$-dimensional space.

The rank is at most $m$, so by the dimension formula, $\dim(\ker) = n - \operatorname{rank} \geq n - m$.
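A small numerical sanity check of the argument (the `rank` helper is a from-scratch Gaussian elimination over the rationals; it is my own sketch, not anything from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank by Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in rows]
    r, cols = 0, len(A[0]) if A else 0
    for c in range(cols):
        pivot = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

# a 2x4 homogeneous system: rank-nullity gives nullity = n - rank >= n - m
A = [[1, 2, 3, 4],
     [2, 4, 6, 8]]  # second row is dependent, so rank is 1
m, n = 2, 4
nullity = n - rank(A)
print(nullity)  # 3, which is >= n - m = 2
```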

**Prove that every matrix of rank 1 has the form , where are dimensional column vectors. How uniquely determined are these vectors?**

We need to show that every matrix of this form has rank one, and conversely that every rank-one matrix has this form.

Let and

Then the $i$th row of the product is

Each row is in the span of a single row vector, therefore the matrix has rank one. It is useful to think of this as ‘the image has dimension one’.

Let be an arbitrary rank-one matrix. Then its rows are all scalar multiples of some vector . Therefore each row is of the form

If we assemble these scalars into a vector, then we get that

This construction depends on a choice of , which then determines . This choice could be any scalar multiple of a known . Thus the vectors are unique only up to a nonzero scalar.

There is, however, a canonical choice, which is

that is, the unit (normalised) vector in the span of .
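A sketch of both directions of the construction (the `outer` helper and the specific vectors are hypothetical, my own illustration):

```python
from fractions import Fraction

def outer(a, b):
    """The matrix a b^T built from two column vectors a and b."""
    return [[ai * bj for bj in b] for ai in a]

a = [2, 4, 6]
b = [1, 3, 5]
M = outer(a, b)
print(M)  # every row is a scalar multiple of b, so M has rank one

# the factorisation is only unique up to a nonzero scalar c:
c = Fraction(1, 2)
a2 = [c * x for x in a]   # scale a down by c ...
b2 = [x / c for x in b]   # ... and b up by 1/c
assert outer(a2, b2) == M  # same matrix, different pair of vectors
```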

**(a) ****Let be vector spaces over a field . Show that the two operations **

* and *

*make the product set into a product vector space.*

The product contains the zero element as . It is closed under addition as each factor is closed under addition, and closed under scalar multiplication as each factor is closed under scalar multiplication.

**(b) Let be subspaces of a vector space . Show that **

** where **

**is a linear transformation.**

and

therefore the transformation is linear.

**(c) Express the dimension formula for in terms of dimensions of subspaces of .**

Think of as being represented by its matrix, which takes vectors in to vectors in .

Note that and not necessarily

Where

Therefore

and

This expresses the dimension formula for in terms of subspaces of . Noting that .

]]>

**Problem 3.3.1**

**(a) **Prove that the scalar product of a vector with the zero element of the scalar field is the zero vector.

This follows from distributivity of scalar multiplication in a vector space.
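Spelled out from the axioms, the usual argument is:

```latex
0\,v = (0 + 0)\,v = 0\,v + 0\,v
```

and adding $-(0\,v)$ to both sides gives $0\,v = \mathbf{0}$.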

**(b) **Prove that if is in the vector space then is in also.

Since the span of is a subspace of .

If then there exists no linear combination such that .

This is false for , therefore .

**Problem 3.3.2**

*Which of the following subsets is a subspace of the vector space of matrices with coefficients in ?*

**(a)** symmetric matrices, i.e. .

Can change condition that to equivalent condition that .

Addition preserves membership: $(A+B)_{ij} = A_{ij} + B_{ij} = A_{ji} + B_{ji} = (A+B)_{ji}$, so the sum of symmetric matrices is symmetric.

Scalar multiplication preserves symmetry, and the zero matrix is symmetric.

Therefore symmetric matrices form a vector subspace.
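A quick check of the three conditions on concrete matrices (a pure-Python sketch; helper names and examples are mine):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def is_symmetric(A):
    return A == transpose(A)

A = [[1, 2], [2, 5]]
B = [[0, 7], [7, 3]]
c = 4

add = [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]
scaled = [[c * x for x in row] for row in A]
zero = [[0, 0], [0, 0]]

# closed under addition and scalar multiplication, and contains zero
assert is_symmetric(add) and is_symmetric(scaled) and is_symmetric(zero)
```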

**(b)** invertible matrices.

Not a vector subspace, as the zero matrix is not invertible.

**(c)** upper triangular matrices.

Obviously if $A, B$ are upper triangular then $A + B$ is upper triangular, as addition of matrices is elementwise.

A scalar multiple $cA$ is also upper triangular, as multiplication by a scalar preserves zeroes.

The zero matrix is also upper triangular, as the definition requires only that the entries below the diagonal are zero, and not that the entries on or above the diagonal are nonzero.

]]>

**Problem 2.2.1 :**

Produce a multiplication table for .

I’ll leave this one for now, as it is very tedious to typeset in LaTeX on WordPress with my current knowledge of the platform.

**Problem 2.2.2 : **

*Let be a set with an associative law of composition and with identity. Prove that the subset consisting of the invertible elements is a group.*

The subset contains the identity, which is its own inverse. Each element of the subset has an inverse by hypothesis, and the inverse of an invertible element is itself invertible. If $a, b$ are invertible then so is their product $ab$, as $b^{-1}a^{-1}$ is an inverse of $ab$.
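A concrete instance of this result (my own illustration): the invertible elements of $\mathbb{Z}/n$ under multiplication are exactly the residues coprime to $n$, and they form a group. A sketch verifying the group properties for $n = 12$:

```python
from math import gcd

n = 12
# the invertible elements of Z/n under multiplication
units = [a for a in range(1, n) if gcd(a, n) == 1]

# contains the identity
assert 1 in units

# closed under the operation: a product of units is a unit
assert all((a * b) % n in units for a in units for b in units)

# every element has an inverse within the subset
assert all(any((a * b) % n == 1 for b in units) for a in units)

print(units)  # [1, 5, 7, 11]
```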

**Problem 2.2.3 : **

*Let be elements of a group .*

**(a)** Solve for , given .

**(b)** Suppose that . Does it follow that **(i)** ? Does it follow that **(ii)** $yxz = 1$?

It does not follow, as group elements do not necessarily commute. Note that if then **(ii)** follows, and if and then **(i)** follows.

**Problem 2.2.4 : **

*Determine which of the following pairs are subgroups and which are not.*

**(a)** is a subgroup of .

**(b)** is a subgroup of

**(c)** The set of positive integers (with addition) is **not** a subgroup of

**(d)** The set of positive reals (with multiplication) is a subgroup of

**(e)** The set of matrices such that and otherwise, is **not** a subgroup of

**Problem 2.2.5 : **

Show that if a subgroup $H \subset G$ has an identity, then it must be the identity of $G$, and show that the same statement is also true of inverses.

Suppose $H$ has identity $e_H$ and $G$ has identity $e_G$. Then for $h \in H$ we have $e_H h = h = e_G h$. With the application of the inverse of $h$, we can show that these must be equal.

Suppose $h \in H$ has inverse $h'$ in $H$ and inverse $h^{-1}$ in $G$. As the identities in $H$ and $G$ are the same element, the inverses must be equal, as they describe the same map.

**Problem 2.2.6 : **

*Let $G$ be a group. Define the opposite group $G^\circ$ on the same underlying set, with law of composition $\circ$ given by $a \circ b = ba$. Prove that $G^\circ$ is a group.*

The opposite law is associative: $a \circ (b \circ c) = (cb)a = c(ba) = (a \circ b) \circ c$, using associativity of the original group operation.

$G^\circ$ contains the identity element, and $e \circ a = ae = a = ea = a \circ e$.

$G^\circ$ contains each inverse element, and each left inverse is a right inverse: $a \circ a^{-1} = a^{-1}a = e = aa^{-1} = a^{-1} \circ a$, so inverses exist and are unique.

If $a, b \in G^\circ$ then $a \circ b = ba$ lies in the underlying set, so $G^\circ$ is closed under its operation. So $G^\circ$ is a group.
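A brute-force check of these axioms on a small non-abelian group, $S_3$ (the representation as permutation tuples and all names are my own choices):

```python
from itertools import permutations

# S3 as permutations of (0, 1, 2); composition (f o g)(x) = f(g(x))
G = list(permutations(range(3)))

def compose(f, g):
    return tuple(f[g[x]] for x in range(3))

def opposite(f, g):
    """The opposite law: compose in the reversed order."""
    return compose(g, f)

e = (0, 1, 2)

# associativity of the opposite law, checked on all 216 triples
assert all(opposite(opposite(a, b), c) == opposite(a, opposite(b, c))
           for a in G for b in G for c in G)

# same identity, and every element still has a two-sided inverse
assert all(opposite(e, a) == opposite(a, e) == a for a in G)
assert all(any(opposite(a, b) == opposite(b, a) == e for b in G) for a in G)
```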

]]>

More updates on this post to come.

**Problem 1.M.1 :**

*Let a matrix be given in the form*

*where each block is an matrix. Suppose is invertible and . Use block multiplication to prove that , and give an example to show that this doesn't hold if .*

**Problem 1.M.2 :**

*Let be an matrix with . Prove that has no left inverse by comparing to the square matrix obtained by adding rows of zeros to the bottom.*

**Problem 1.M.3 :**

*The trace of a square matrix is the sum of its diagonal entries. Show the following properties of the trace.*

**(a)**

**(b)**

**(c)** If is invertible then

thus **(a)**

therefore **(b)**
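A numerical sanity check of the three standard trace identities, assuming these are the ones intended by (a)–(c): trace is additive, $\operatorname{tr}(AB) = \operatorname{tr}(BA)$, and trace is invariant under conjugation. The helpers and example matrices are my own.

```python
def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]

# (a) trace is additive
S = [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]
assert trace(S) == trace(A) + trace(B)

# (b) trace(AB) = trace(BA), even though AB != BA in general
assert trace(matmul(A, B)) == trace(matmul(B, A))

# (c) trace is invariant under conjugation: trace(P A P^-1) = trace(A)
P = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]
assert trace(matmul(matmul(P, A), Pinv)) == trace(A)
```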

**Problem 1.M.4 : ** Show that the equation has no solution in real matrices .

We take the trace identity from part **(b)** and use it to derive a contradiction.

note that

This implies that

It follows that

A contradiction. Therefore no such exist.

**Problem 1.M.5 :**

**Problem 1.M.6 :**

**Problem 1.M.7 :**

**Problem 1.M.8 :**

**Problem 1.M.9 :**

**Problem 1.M.10 :**

**Problem 1.M.11 :**