A correlation of O-H stretching frequencies (from infrared spectroscopy) with O…O and H…O bond lengths (from structural data) of minerals was established. References on 65 minerals yielded 125 data pairs for the d(O…O)–ν correlation; due to rare or inaccurate data on proton positions, only 47 data pairs could be used for the d(H…O)–ν correlation. The data cover a wide range of wavenumbers from 1000 to 3738 cm⁻¹ and O…O distances from 2.44 to 3.5 Å. They originate from silicates, (oxy)hydroxides, carbonates, sulfates, phosphates, and arsenates with OH⁻, H₂O, or even H₃O₂⁻ units forming very strong to very weak H bonds. The correlation function was established in the form ν (cm⁻¹) = 3592 − 304·10⁹·exp(−d(O…O)/0.1321). Because of deviations from ideal straight H bonds, i.e. bent or bifurcated geometry and dynamic proton behavior, but also due to factor group splitting and cationic effects, the data scatter considerably around the regression line. The trends of previous correlation curves and of theoretical considerations were confirmed.
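As a quick numerical illustration of the correlation, the fitted curve can be evaluated directly. Note the assumption: the text only guarantees the leading constants 3592 and 304; the exponential form and the 0.1321 Å constant used below are the commonly cited shape of this fit and should be treated as supplied, not as quoted from the text.

```python
import math

def oh_stretch_wavenumber(d_oo_angstrom: float) -> float:
    """O-H stretching wavenumber (cm^-1) from the O...O distance (Angstrom).

    Assumed form: v = 3592 - 304e9 * exp(-d / 0.1321).
    Only the constants 3592 and 304 are given in the text above;
    the exponential term is an assumption based on the usual form of this fit.
    """
    return 3592 - 304e9 * math.exp(-d_oo_angstrom / 0.1321)

# Short O...O (strong H bond) -> strongly lowered frequency;
# long O...O (weak H bond) -> close to the free-OH value of ~3592 cm^-1.
for d in (2.44, 2.6, 3.0, 3.5):
    print(f"d(O...O) = {d:.2f} A  ->  v = {oh_stretch_wavenumber(d):7.0f} cm^-1")
```

The monotonic trend (longer O…O distance, higher wavenumber) is the behavior the abstract describes; the scatter discussed above means individual minerals deviate from this smooth curve.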
The idea with tensordot is pretty simple - we input the arrays and the respective axes along which the sum-reductions are intended. The axes that take part in sum-reduction are removed in the output, and all of the remaining axes from the input arrays are spread out as different axes in the output, keeping the order in which the input arrays are fed. Let's look at a few sample cases with one and two axes of sum-reduction, and also swap the input places to see how the order is kept in the output.

Inputs : In : A = np.random.randint(2, size=(2, 6, 5))

We can extend this to as many axes as possible.

It may be easier to experiment than to explain. There's no special tensor math going on, just extending dot to work in higher dimensions; "tensor" here just means arrays with more than 2d. tensordot swaps axes and reshapes the inputs so it can apply np.dot to two 2d arrays, then swaps and reshapes back to the target shape.

A sample test, summing on one pair of axes: In : np.tensordot(A,B,).shape

If you are already comfortable with einsum, it is simplest to compare the results to that:

In : np.allclose(np.einsum('ijk,lim',A,B), np.tensordot(A,B,))

Another, summing on two:

In : np.allclose(np.einsum('ijk,jim',A,B), np.tensordot(A,B,))

Given the mix of dimensions, I don't think there's another combination.

The answers above are great and helped me a lot in understanding tensordot, but they don't show the actual math behind the operations. That's why I did the equivalent operations in TF 2 for myself and decided to share them here: a = tf.constant()

print(f"\t tf.einsum('ij,ij->', a, b)\t\t- ((0th axis of a, 1st axis of a), (0th axis of b, 1st axis of b))")
tf.einsum('ij,kl', a, b) - ((the last 0 axes of a), (the first 0 axes of b))
tf.einsum('ij,ik', a, b) - ((0th axis of a), (0th axis of b))
tf.einsum('ij,ki', a, b) - ((0th axis of a), (1st axis of b))
tf.matmul(a, b) - ((the last 1 axes of a), (the first 1 axes of b))
tf.einsum('ij,jk', a, b) - ((1st axis of a), (0th axis of b))
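A minimal, self-contained NumPy sketch of the points above (the array shapes (3, 4, 2) and (2, 4, 5) are chosen here for illustration and are not taken from the original examples): summed axes disappear from the output, the surviving axes keep the order of the inputs, and tensordot reduces to a moveaxis + reshape + np.dot.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 9, size=(3, 4, 2))
B = rng.integers(0, 9, size=(2, 4, 5))

# One pair of summed axes: contract axis 1 of A (length 4) with axis 1 of B.
# Contracted axes vanish; remaining axes appear in input order: (3, 2) + (2, 5).
out1 = np.tensordot(A, B, axes=(1, 1))
print(out1.shape)            # (3, 2, 2, 5)

# Same contraction via einsum: the repeated label 'j' is summed out.
assert np.allclose(out1, np.einsum('ijk,ljm->iklm', A, B))

# Two pairs of axes: contract A's axes (1, 2) with B's axes (1, 0).
out2 = np.tensordot(A, B, axes=((1, 2), (1, 0)))
print(out2.shape)            # (3, 5)
assert np.allclose(out2, np.einsum('ijk,kjm->im', A, B))

# Swapping the inputs swaps the order of the surviving axes in the output.
out2_swapped = np.tensordot(B, A, axes=((1, 0), (1, 2)))
print(out2_swapped.shape)    # (5, 3)
assert np.allclose(out2_swapped, out2.T)

# Under the hood, tensordot is just moveaxis + reshape + np.dot:
A2 = np.moveaxis(A, 1, -1).reshape(3 * 2, 4)   # (i, k, j) flattened over (i, k)
B2 = np.moveaxis(B, 1, 0).reshape(4, 2 * 5)    # (j, l, m) flattened over (l, m)
manual = (A2 @ B2).reshape(3, 2, 2, 5)
assert np.allclose(manual, out1)
```

A scalar `axes=2` argument contracts the last two axes of the first array with the first two axes of the second, which for two equal-shape 2d arrays is the same as summing their elementwise product, matching the `tf.reduce_sum(tf.multiply(a, b))` equivalence listed above.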