Formal attempt

We try to keep everything combinatorial, so keep an abstract simplicial complex in the back of your mind, and think of a simplex as being something like {a, b, c} for a triangle.
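
For concreteness (an illustrative example of my own, not from the original notes), the triangle on vertex set {a, b, c}, viewed as an abstract simplicial complex, is the following collection of simplices; in LaTeX, with Dim as in the notations below:

    K = \{\ \{a\},\ \{b\},\ \{c\},\ \{a,b\},\ \{a,c\},\ \{b,c\},\ \{a,b,c\}\ \},
    \qquad \mathrm{Dim}(\{a,b,c\}) = |\{a,b,c\}| - 1 = 2 .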

Notations:

  • Let #(n) := {1, …, n} ⊆ ℕ - I did want to use C(n) for "count" or "consecutive" but given the context that'd be a poor choice!
    • Consider #(n) as a poset in its own right (in fact a total order is in play) with the "usual" ordering on ℕ that it inherits. This is a standard substructure construction.
  • Let K be our Δ-complex, as a tuple of sets; let us sidestep defining exactly what this is for now.
  • Let S_n(K) be the set of n-simplices of K
  • Let I(m,n) be defined to be the collection of all injective monotonic functions of the form f : #(m+1) → #(n+1)[Note 1] (see the short enumeration sketch after this list)
    • The +1 comes from the definition Dim(σ) := |σ| - 1 ∈ ℕ_0 - we take care with the case σ = ∅; I'm developing a framework including this and have come up with 2 "null objects" that do not alter the theory, but for now Dim(∅) = -1 will do. It won't matter.
  • Let Δ^m be the standard m-simplex in ℝ^{m+1}
  • G(n,m) - this is our goal; it is a collection of maps of the form G : S_n(K) → S_m(K) (Caveat: notice the flip of n and m) with certain properties.
    • Our goal is to find a bijection, say F : I(m,n) → G(n,m)
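
Since each f ∈ I(m,n) is strictly increasing (see Note 1), it is determined by its image, an (m+1)-element subset of #(n+1), so I(m,n) is easy to enumerate. A minimal Python sketch of this (the function name and tuple representation are mine, purely illustrative, not part of the notes):

    from itertools import combinations

    def I(m, n):
        """All injective monotonic maps f : #(m+1) -> #(n+1), each represented
        as the tuple (f(1), ..., f(m+1)). Such a map is strictly increasing, so
        it is determined by its image, an (m+1)-element subset of #(n+1)."""
        assert 0 <= m <= n
        return list(combinations(range(1, n + 2), m + 1))

    print(I(1, 2))        # [(1, 2), (1, 3), (2, 3)] - the three 1-faces of a 2-simplex
    print(len(I(2, 3)))   # 4 = choose(4, 3) - the four 2-faces of a 3-simplex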

First stab

Definition:

  • The "gluing data" of a Δ-complex corresponds to two parts:
    1. Sn(K) - the set of n-simplices of K
    2. The "gluing maps", Gf, which can be enumerated as follows:
      • Let m, nN0 be given and be such that mn
        • Then for each fI(m,n) there exists a Gf:Sn(K)Sm(K) such that:
          1. If f=Id#(n+1) then Gf=IdSn(K), and
          2. If fI(m,n) and gI(n,j) then Ggf=GfGg

That's it!
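
To make the two conditions concrete, here is a rough Python sketch of what such gluing data might look like and how the conditions could be spot-checked. Every name here (S, G, check_gluing_axioms, and the dictionary representation of the maps) is hypothetical and mine, not part of the definition; f and g are increasing tuples as in the earlier sketch.

    def identity_map(simplices):
        # The identity Id_{S_n(K)}, as a dictionary.
        return {sigma: sigma for sigma in simplices}

    def compose(G_f, G_g):
        # (G_f o G_g)(sigma) = G_f(G_g(sigma)), for G_g : S_j(K) -> S_n(K) and G_f : S_n(K) -> S_m(K).
        return {sigma: G_f[G_g[sigma]] for sigma in G_g}

    def check_gluing_axioms(S, G):
        """S[n] plays the role of S_n(K); G[(m, n, f)] plays the role of G_f : S_n(K) -> S_m(K)."""
        # Condition 1: f = Id_{#(n+1)} implies G_f = Id_{S_n(K)}.
        for (m, n, f), G_f in G.items():
            if m == n and f == tuple(range(1, n + 2)):
                assert G_f == identity_map(S[n])
        # Condition 2: G_{g o f} = G_f o G_g whenever f is in I(m, n) and g is in I(n, j).
        for (m, n, f), G_f in G.items():
            for (n2, j, g), G_g in G.items():
                if n2 != n:
                    continue
                g_after_f = tuple(g[i - 1] for i in f)   # (g o f)(i) = g(f(i))
                if (m, j, g_after_f) in G:
                    assert G[(m, j, g_after_f)] == compose(G_f, G_g)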

Problems

  1. I need to form a statement (and then prove it) which shows that we need only consider the m = k and n = k+1 cases (for k ∈ ℕ_0) - we don't need all of them, since statement 2 of the G_f function definition ensures the result is consistent. It's pretty obvious but I'm not sure how to phrase it. (One candidate phrasing is sketched after this list.)
  2. I need to show that we have a Hatcher-Δ-complex if and only if we have one of these.
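
For problem 1, one candidate phrasing (a suggestion only; the factorisation claim below is exactly what would still need proving): every map in I(m,n) factors through consecutive dimensions, and condition 2 of the definition then determines every G_f from the consecutive-dimension maps. In LaTeX:

    \text{For every } f \in I(m,n) \text{ with } m < n \text{ there exist } f_k \in I(k, k+1),\ k = m, \ldots, n-1, \text{ with}
    f = f_{n-1} \circ \cdots \circ f_m , \qquad \text{whence} \qquad G_f = G_{f_m} \circ \cdots \circ G_{f_{n-1}} .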

Gluing process

  • Let m, n ∈ ℕ be given such that m ≤ n.
    • Let f ∈ I(m,n) be given, so f : #(m+1) → #(n+1) is an injection and is monotonic - as per the definition of I(m,n).
      • We associate f with L_f : ℝ^{m+1} → ℝ^{n+1}, the linear map defined by its action on a basis as L_f(e_i) := e_{f(i)}, where e_i ∈ ℝ^whatever is a tuple that has 0 in every entry except the i-th, which has 1; as usual.[Note 2] (A small numerical sketch of L_f follows this section.)
        • It is fairly easy to see that Ker(L_f) = {0}; then by "a linear map is injective if and only if its kernel is trivial" and "the image of a linear map is a vector subspace of the codomain" we see that:
          • L_f' : ℝ^{m+1} → L_f(ℝ^{m+1}) (that is, L_f with its codomain restricted to its image) is a linear isomorphism
        • As ℝ^{m+1} is finite dimensional we see that L_f' is a continuous map, and so forth; as would be L_f itself, of course.
        • Notice that L_f'|_{Δ^m} : Δ^m → (some m-face of Δ^n)
          • and that this is a homeomorphism onto its image.
        • This is the idea of our "gluing map": we glue some m-face of an n-simplex to some m-simplex that we already have.
          • Define G_f : S_n(K) → S_m(K) by G_f : σ ↦ (the m-simplex to which the m-face of σ given by f corresponds)

(see paper notes. Will write this again later)
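
As a small sanity check on the linear-map picture above, here is a numerical sketch in Python with numpy (the function name and the chosen f are mine, purely illustrative): it builds the matrix of L_f from f and confirms that a point of Δ^1 is carried onto the corresponding 1-face of Δ^2.

    import numpy as np

    def L_f_matrix(f, n):
        """Matrix of L_f : R^{m+1} -> R^{n+1}, where L_f(e_i) = e_{f(i)} and
        f is given as the tuple (f(1), ..., f(m+1))."""
        m = len(f) - 1
        M = np.zeros((n + 1, m + 1))
        for i, fi in enumerate(f, start=1):
            M[fi - 1, i - 1] = 1.0      # column i of the matrix is e_{f(i)}
        return M

    f = (1, 3)                    # an element of I(1, 2): picks out the {1,3}-edge of Delta^2
    M = L_f_matrix(f, n=2)
    x = np.array([0.25, 0.75])    # a point of Delta^1: non-negative entries summing to 1
    y = M @ x
    print(y)                                                   # [0.25 0.   0.75]
    print(np.isclose(y.sum(), 1.0) and bool((y >= 0).all()))   # True: y lies in the {1,3}-face of Delta^2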


Notes

  1. This basically means:
    • ∀x, y ∈ #(m+1) [x < y ⟹ f(x) < f(y)] - notice the strict ordering used here. This ensures that f is 1-to-1; we can never have equality of f(x) and f(y).
      • Caveat: Not proved yet
        TODO: Do the proof!
  2. There's some abuse of notation going on here, as if e_i ∈ ℝ^n then e_i ∉ ℝ^m with m ≠ n, of course. We identify ℝ^m with the subspace of ℝ^n (where n ≥ m) spanned by the first m basis vectors. It's not that big of a leap, so it shouldn't require any more discussion.